# Node actions
The activity that a node performs is defined using actions. Actions come in various types and can be used, for example, to specify the communication details for plugins or integrations.
Node actions allow you to incorporate **business rules** into a **process**, and send various data to be displayed in front-end applications.
You can define and add actions only on the following **node** types: [**send message task**](../node/message-send-received-task-node#message-send-task), [**task**](../node/task-node) and [**user task**](../node/user-task-node).
The FlowX.AI platform supports two categories of node actions:
* Business rules
* User interactions
### Business rules
Actions can use action rules such as DMN rules, MVEL expressions, or scripts in JavaScript, Python, or Groovy to attach business rules to a node.
### User interactions
Each button on the user interface corresponds to a manual user action.
### Action edit
Actions can be:
* Manual or automatic
* Optional or mandatory
* One-time or repeatable
If not all mandatory actions on a node are executed, the flow (token) will not advance.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/actns_ovrvw.png)
### Action parameters
Action params store extra values as key/value pairs, like topics for outgoing messages or message formats for the front-end.
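For illustration, the params for a message-sending action might hold pairs like the following (a hypothetical sketch; the exact keys depend on the action type):
```json
{
  "topicName": "ai.flowx.in.example.request.v1",
  "messageFormat": "V2"
}
```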
A decision on an **exclusive gateway** is defined using a **node rule**. Similar to action rules, these can be set using [DMN](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-dmn) or [MVEL](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel).
## Configuring actions
Actions have a few characteristics that need to be set:
* an **action** can be set as **manual** or **automatic**. Manual actions can be executed only through the REST API; this usually means they are triggered by the application user from the interface. Automatic actions are executed without any need for external triggers.
* manual actions can be either mandatory or optional. Automatic actions are all considered mandatory.
* all actions have an **order** - when there are multiple actions on a single node, their running order must be set
* **repeatable** - actions that can be triggered more than once are marked accordingly
* the actions can have a parent/child hierarchy
* **allow back to this action** - the user can navigate back to this action from a subsequent node
For more information, see the following sections.
## Linking actions together
Certain actions can be set to run immediately after others. There are two ways to link actions together:
* **Child actions** - link a child to its parent by setting the `parentName` field on the child action; if the parent's `autoRunChildren` flag is set to `true`, the child actions run immediately after the parent action completes.
* **Callback actions** - performed when a specific message is received by the Engine, indicated by the `callbacksForAction` header on that message.
### Child actions
A parent action has a flag `autoRunChildren`, set to `false` by default. When this flag is set to `true`, the child actions (the ones defined as mandatory and automatic) will be run immediately after the execution of the parent action is finalized.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/autorun_children.png)
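Conceptually, the link between a parent and a child action looks like this (an illustrative sketch using the `parentName` and `autoRunChildren` fields described above; not the platform's exact persisted format):
```json
[
  { "name": "save_client_data", "autoRunChildren": true },
  { "name": "notify_client", "parentName": "save_client_data", "triggerType": "AUTOMATIC", "required": true }
]
```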
### Callback actions
Child actions can be marked as callbacks to be run after a reply from an external system is received. They will need to be set when defining the interaction with the external system (the [Kafka send action](../node/message-send-received-task-node#configuring-a-message-send-task-node)).
For example, a callback function might be used to handle a user's interaction with a web page, such as uploading a file. When the user performs the action, the callback function is executed, allowing the web application to respond appropriately.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/callback1.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/callback2.png)
#### Example
Callback actions are added in the **Advanced configuration** tab, in the **header** param - `callbacksForAction`.
```js
{"processInstanceId": ${processInstanceId}, "destinationId": "upload_file", "callbacksForAction": "upload_file"}
```
* `callbacksForAction` - the value of this key is a string that specifies a callback action associated with the "upload\_file" destination ID. This is part of an event-driven system (Kafka send action) where this callback will be called once the "upload\_file" action is completed.
## Scheduling actions
A useful feature of actions is the ability to schedule them to run at a future time. Actions can be configured to run after a period of time, starting from the moment the token triggered them.
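For example, a **Timer expression** of `PT15M` (ISO 8601 duration format) delays the action's execution by 15 minutes from the moment the token reaches the node.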
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/scheduled_actions.png)
# Append params to parent process
**Append Params to Parent Process** is an action type that allows you to send data from a subprocess to a parent process.
**Why is it important?** If you are using subprocesses that produce data that needs to be sent back to the main **process**, you can do that by using an **Append Params to Parent Process** action.
## Configuring an Append params to parent process
After you create a process designed to be used as a [subprocess](../process/subprocess), you can configure the action. To do this, you need to add an **Append Params to Parent Process** on a [**Task node**](../node/task-node) in the subprocess.
The following properties must be configured:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
### Action edit
* **Name** - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Order** - if multiple actions are defined on the same node, the running order should be set using this option
* **Timer expression** - it can be used if a delay is required on that action. The format used for this is [ISO 8601 duration format](https://www.w3.org/TR/NOTE-datetime) (for example, a delay of 30 seconds will be set up as `PT30S`)
* **Action type** - should be set to **Append Params to Parent Process**
* **Trigger type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized
### Back in steps
* **Allow BACK on this action** - back in process is a functionality that allows you to go back in a business process and redo a series of previous actions in the process. For more details, check [**Moving a token backwards in a process**](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section
### Parameters
* **Copy from current state** - data that you want to be copied back to the parent process
* **Destination in the parent state** - on what key to copy the param values
To recap: if you have a **Copy from current state** with a simple **JSON** - `{"age": 17}` - that needs to be available in the parent process on the `application.client.age` key, you will need to set this field (**Destination in the parent state**) to `application.client`, which is the key to append to in the parent process.
**Advanced configuration**
* **Show Target Process** - the ID of the parent process where the params should be copied; this is available in the `${parentProcessInstanceId}` variable if you defined it when you [started the subprocess](./start-subprocess-action)
### Data to send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [User Task configuration](../node/user-task-node) section)
**Data to send** option is configurable only when the action **trigger type** is **Manual.**
## Example
We have a subprocess that allows us to enter the age of the client on the **data.client.age** key, and we want to copy the value back to the parent process. The key to which we want to receive this value in the parent process is **application.client.age**.
This is the configuration to apply the above scenario:
**Parameters**
* **Copy from current state** - `{"client": {"age": ${data.client.age}}}` to copy the age of the client (the param value we want to copy)
* **Destination in the parent state** - `application` to append the data to the **application** key on the parent process
**Advanced configuration**
* **Show Target Process** - `${parentProcessInstanceId}` to copy the data on the parent of this subprocess
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/append_params_example.png)
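With this configuration, assuming the client entered the age 17, the parent process state would receive (showing only the appended key):
```json
{
  "application": {
    "client": {
      "age": 17
    }
  }
}
```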
# Business rules types
A business rule is an action type that allows you to configure a script on a BPMN node.
The script can read and write the data available on the process at the moment it is executed. For this reason, it is important to understand what data is available on the process when the script runs.
Business rules can be attached to a node by using actions with [**action rules**](../actions#action-rules) on them. These can be specified using [DMN rules](./dmn-business-rule-action), [MVEL](../../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel) expressions, or scripts written in JavaScript, Python, or Groovy.
![Business rule action](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/business_rule_action.png)
For more information about supported scripting languages, see the configuration and examples below.
You can also test your rules by using the **Test Rule** function.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/test_rule_function.png)
## Configuration
To use a Business Rules Action, follow these steps:
1. **Select a BPMN Task Node**: Choose the BPMN task node to which you want to attach the Business Rules Action. This could be a Service Task, User Task, or another task type that supports actions.
2. **Define the Action**: In the task node properties, configure the "Business Rules Action" field and select the desired language (DMN, MVEL, JavaScript, Python, or Groovy).
3. **Write the Business Rule**: In the selected language, write the business rule or decision logic. This rule should take input data, process it, and possibly generate an output or result.
4. **Input and Output Variables**: Ensure that the task node can access the necessary input variables from the BPMN process context and store any output or result variables as needed.
5. **Execution**: When the BPMN process reaches the task node, the attached Business Rules Action is executed, and the defined business rule is evaluated.
6. **Result**: The result of the business rule execution may affect the flow of the BPMN process, update process variables, or trigger other actions based on the logic defined in the rule.
Let's take a look at the following example. We have some data about the gender of a user, and we need to create a business rule that computes the formal title based on the gender:
1. This is how the process instance data looks before it reaches the business rule:
```json
{
  "application": {
    "client": {
      "firstName": "David",
      "surName": "James",
      "gender": "M"
    }
  }
}
```
2. When the token reaches this node the following script (defined for the business rule) is executed. The language used here for scripting is MVEL.
```java
if (input.application.client.gender == 'F') {
output.put("application", {
"client": {
"salutation": "Ms"
}
});
} else if (input.application.client.gender == 'M') {
output.put("application", {
"client": {
"salutation": "Mr"
}
});
} else {
output.put("application", {
"client": {
"salutation": "Mx"
}
});
}
```
3. After the script is executed, the process instance data will look like this:
```json
{
"application": {
"client": {
"firstName": "David",
"surName": "James",
"gender": "M",
"salutation": "Mr"
}
}
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/mvel_example.gif)
## Flattened vs unflattened keys
With version [**2.5.0**](https://old-docs.flowx.ai/release-notes/v2.5.0-april-2022/) we introduced unflattened keys inside business rules. Flattened keys are now obsolete. You are notified when you need to delete and recreate a business rule so it contains an unflattened key.
![Obsolete business rule](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/obsolete_business_rule.png)
## Business rules examples
Examples available for [**v2.5.0**](https://old-docs.flowx.ai/release-notes/v2.5.0-april-2022/) version and higher
We will reuse the MVEL example from above and rewrite it in the other supported scripting languages:
```java
if (input.application.client.gender == 'F') {
output.put("application", {
"client": {
"salutation": "Ms"
}
});
} else if (input.application.client.gender == 'M') {
output.put("application", {
"client": {
"salutation": "Mr"
}
});
} else {
output.put("application", {
"client": {
"salutation": "Mx"
}
});
}
```
```python
if input.get("application").get("client").get("gender") == "F":
output.put("application", {
"client" : {
"salutation" : "Ms"
}
})
elif input.get("application").get("client").get("gender") == "M":
output.put("application", {
"client" : {
"salutation" : "Mr"
}
})
else:
output.put("application", {
"client" : {
"salutation" : "Mx"
}
})
```
```js
if (input.application.client.gender === 'F') {
output.application = {
client: {
salutation: 'Ms'
}
};
} else if (input.application.client.gender === 'M') {
output.application = {
client: {
salutation: 'Mr'
}
};
} else {
output.application = {
client: {
salutation: 'Mx'
}
};
}
```
```groovy
def gender = input.application.client.gender
switch (gender) {
    case 'F':
        output.application = [client: [salutation: 'Ms']]
        break
    case 'M':
        output.application = [client: [salutation: 'Mr']]
        break
    default:
        output.application = [client: [salutation: 'Mx']]
}
```
For more detailed information on each type of Business Rule Action, refer to the following sections:
[DMN Business Rule Action](./dmn-business-rule-action)
# Configuring a DMN business rule action
Decision Model and Notation is a graphical language used to specify business decisions. DMN helps convert complex decision-making code into easily readable diagrams.
## Creating a DMN Business Rule action
To create and link a DMN **business rule** action to a task **node** in FlowX, follow these steps:
1. Launch **FlowX Designer** and navigate to **Process Definitions** .
2. Locate and select your specific process from the list, then click on **Edit Process**.
3. Choose a **task node**, and click the **Edit** button (represented by a key icon). This action will open the node configuration menu.
4. Inside the node configuration menu, head to the **Actions** tab and click the "**+**" button to add a new action.
5. From the dropdown menu, select the action type as **Business Rule**.
6. In the **Language** dropdown menu, pick **DMN**.
For a visual guide, refer to the following recording:
![Creating a DMN Business Rule Action](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/create_dmn_business_rule_action.gif)
## Using a DMN Business Rule Action
Consider a scenario where a bank needs to perform client information tasks/actions to send salutations, similar to what was previously created using MVEL [here](./business-rule-action#business-rules-examples).
A business person or specialist can use DMN to design this business rule, without having to delve into technical definitions.
Here is an example of an **MVEL** script defined as a business rule action inside a **Service Task** node:
```java
if (input.application.client.gender == 'F') {
output.put("application", {
"client": {
"salutation": "Ms"
}
});
} else if (input.application.client.gender == 'M') {
output.put("application", {
"client": {
"salutation": "Mr"
}
});
} else {
output.put("application", {
"client": {
"salutation": "Mx"
}
});
}
```
The previous example can be easily transformed into a DMN Business Rule action represented by the decision table:
![DMN Decision Table](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/dmn_decision_ex.png)
In the example above, we used the FEEL expression language to write the rules that must be met for each output to apply. FEEL defines a syntax for expressing conditions that input data should be evaluated against.
**Input** - In the example above, we used the user-selected gender from the first screen as input, bound to the `application.client.gender` key.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/dmn_screen.png)
**Output** - In the example above, we used the salutation (bound to `application.client.salutation`) computed based on the user's gender selection.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/dmn_salutation.png)
DMN also defines an XML schema that allows DMN models to be used across multiple DMN authoring platforms. The following is a simplified reconstruction of the XML source behind the decision table from the previous section (element names may vary slightly by DMN version):
```xml
<decision id="computeSalutation" name="Compute salutation">
  <decisionTable hitPolicy="UNIQUE">
    <input id="genderInput">
      <inputExpression typeRef="string">
        <text>application.client.gender</text>
      </inputExpression>
    </input>
    <output id="salutationOutput" name="application.client.salutation" typeRef="string" />
    <rule>
      <inputEntry><text>"M"</text></inputEntry>
      <outputEntry><text>"Mr"</text></outputEntry>
    </rule>
    <rule>
      <inputEntry><text>"F"</text></inputEntry>
      <outputEntry><text>"Ms"</text></outputEntry>
    </rule>
    <rule>
      <inputEntry><text>"O"</text></inputEntry>
      <outputEntry><text>"Mx"</text></outputEntry>
    </rule>
  </decisionTable>
</decision>
```
# Kafka send action
The FlowX Designer offers various options to configure the Kafka Send Action through the Actions tab at the node level.
* [Action Edit](#action-edit)
* [Parameters](#parameters)
![Kafka Send Action Configuration](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/kafka_send_action_confg.gif)
### Action Edit
* **Name** - Used internally to distinguish between different [actions](../actions/actions) within the process. Establish a clear naming convention for easy identification.
* **Order** - Sets the running order for multiple actions on the same node.
* **Timer Expression** - Enables a delay, using [ISO 8601 duration format](../node/timer-events/timer-expressions#iso-8601) (e.g., `PT30S` for a 30-second delay).
* **Action Type** - Designate as **Kafka Send Action** for sending messages to external systems.
* **Trigger Type** - Always set to Automatic.
Kafka Send Actions trigger automatically when the process reaches this step; there is no manual option.
* **Required Type** (Mandatory/Optional) - **Automatic** actions are typically set as **mandatory**. Manual actions can be either mandatory or optional.
* **Repeatable** - Allows triggering the action multiple times if required.
* **Autorun Children** - When activated, child actions (mandatory and automatic) execute immediately after the parent action concludes.
### Parameters
You can add parameters via the **Custom** option or import predefined parameters from an integration.
For detailed information on **Integrations management**, refer to [**this link**](../../platform-deep-dive/core-extensions/integration-management/).
* **Topics** - Specifies the Kafka topics listened to by the external system for requests.
* **Message** - Contains the message payload to be dispatched.
* **Advanced Configuration (Headers)** - Represents a JSON value sent within the Kafka message headers.
![Parameters](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/message_send_parameters.png)
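Putting the three parameters together, a minimal configuration might look like this (an illustrative sketch; the topic name and keys are placeholders, following the conventions used elsewhere in this documentation):
```json
{
  "topics": "ai.flowx.in.example.request.v1",
  "message": { "clientId": "${application.client.id}" },
  "headers": { "processInstanceId": "${processInstanceId}", "callbacksForAction": "example_action" }
}
```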
## Kafka Send Action Scenarios
The Kafka Send action serves as a versatile tool that facilitates seamless communication across various systems and plugins, enabling efficient data transfer, robust document management, notifications, and process initiation.
This action finds application in numerous scenarios while configuring processes:
* **Communicating with External Services**
* **Interacting with Connectors** - For example, integrating a connector in the FlowX.ai Designer [here](../../platform-deep-dive/integrations/building-a-connector#integrating-a-connector-in-flowxai-designer).
* **Engaging with Plugins:**
* **Document Plugin:**
* Generating, uploading, converting, and splitting documents - Explore examples [here](../../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview).
* Updating/deleting documents - Find an example [here](../../platform-deep-dive/plugins/custom-plugins/documents-plugin/deleting-a-file)
* Optical Character Recognition (OCR) integration - View an example [here](../../platform-deep-dive/plugins/custom-plugins/ocr-plugin#scenario-for-flowxai-generated-documents).
* **Notification Plugin:**
* Sending notifications - Example available [here](../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-a-notification) and emails with attachments [here](../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-an-email-with-attachments).
* One-Time Password (OTP) validation - Refer to this [example](../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-a-notification).
* Forwarding notifications to external systems - Explore this [example](../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/forwarding-notifications-to-an-external-system).
* **OCR Plugin**
* **Customer Management Plugin**
* **Task Management Plugin:**
* Bulk operations update - Find an example [here](../../platform-deep-dive/plugins/custom-plugins/task-management/task-management-overview#bulk-updates).
* **Requesting Process Data for Forwarding or Processing** - For example, Data Search [here](../../platform-deep-dive/core-extensions/search-data-service).
* **Initiating Processes** - Starting a process via Kafka or using hooks. Find examples [here](../../flowx-designer/managing-a-process-flow/starting-a-process).
# Send data to user interface
Send data to user interface action is based on Server-Sent Events (SSE), a web technology that enables servers to push real-time updates or events to clients over a single, long-lived HTTP connection. It provides a unidirectional communication channel from the server to the client, allowing the server to send updates to the client without the need for the client to continuously make requests.
**Why is it useful?** It provides real-time updates and communication between the **process** and the frontend application.
## Configuring a Send data to user interface action
Multiple options are available for this type of action and can be configured via the **FlowX Designer**. To configure a Send data to user interface, use the **Actions** tab at the [task node level](../../flowx-designer/managing-a-process-flow/adding-a-new-node), which has the following configuration options:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
### Action Edit
* **Name** - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Order** - if multiple actions are defined on the same node, the running order should be set using this option
* **Timer expression** - it can be used if a delay is required on that action. The format used for this is [**ISO 8601 duration format**](https://www.w3.org/TR/NOTE-datetime) (for example, a delay of 30 seconds will be set up as `PT30S`)
* **Action type** - should be set to Send data to user interface
* **Trigger type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized
### Back in steps
* **Allow BACK on this action** - back in process is a functionality that allows you to go back in a business process and redo a series of previous actions in the process. For more details, check [Moving a token backwards in a process](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/websocket_action_edit.png)
### Parameters
The following fields are required for a minimum configuration of this type of action:
* **Message Type** - if you only want to send data, you can set this to **Default** (it defaults to the **data** message type)
If you need to start a new process using a **Send data to user interface**, you can do that by setting the **Message Type** to **Action** and you will need to define a **Message** with the following format:
```json
{
"processName": "demoProcess",
"type": "START_PROCESS_INHERIT",
"clientDataKeys":["webAppKeys"],
"params": {
"startCondition": "${startCondition}",
"paramsToCopy": []
}
}
```
* `paramsToCopy` - choose which of the keys from the parent process parameters to be copied to the subprocess
* `withoutParams` - choose which of the keys from the parent process parameters are to be ignored when copying parameter values from the parent process to the subprocess
* **Message** - here you define the data to be sent as a JSON object; you can use constant values and values from the process instance data.
* **Target Process** - is used to specify to what running process instance should this message be sent - **Active process** or **Parent process**
If you are defining this action on a [**Call activity node**](../node/call-subprocess-tasks/call-activity-node), you can send the message to the parent process using **Target Process: Parent process**.
### Data to send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [User task configuration](../node/user-task-node) section)
**Data to send** option is configurable only when the action **trigger type** is **Manual**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/websocket_data_to_send.gif)
### Send update data example
To send the latest value from the [process instance](../process/process-instance) data found at `application.client.firstName` key, to the frontend app, you can do the following:
1. Add a **Send data to user interface**.
2. Set the **Message Type** to **Default** (it defaults to the **data** message type).
3. Add a **Message** with the data you want to send:
* `{ "name": "${application.client.firstName}" }`
4. Choose the **Target Process**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/websocket_send_update_data.gif)
# Start integration workflow action
The Start Integration Workflow action initiates a configured workflow to enable data processing, transformation, or other tasks across connected systems.
The Start integration workflow action allows for data transfer by sending configured inputs to initiate workflows and receiving outputs at the designated result key once the workflow completes. Here’s an overview of its key functionalities:
### Triggering
When a Start integration workflow action is triggered:
* The input data mapped in Input is sent as start variables to the workflow.
* The workflow runs with these inputs.
* Workflow output data is captured on the specified result key upon completion.
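As an illustrative sketch (the field names below are placeholders, not the exact Designer labels), a configured action conceptually holds:
```json
{
  "workflowName": "client_data_enrichment",
  "input": { "clientId": "${application.client.id}" },
  "resultKey": "application.client.enriched"
}
```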
### Integration mapping
The Select Workflows dropdown displays:
* All workflows within the current application version.
* Any workflows referenced in the application (e.g., via the Library).
### Workflow output
To receive output data from a workflow:
* Add a Receive Message Task node to the BPMN process.
* This node ensures that output data is properly captured and processed based on the designated workflow configuration.
# Start subprocess action
A Start subprocess action is an action that allows you to start a subprocess from another (parent) process.
Using **subprocesses** is a good way to split the complexity of your business flow into multiple, simple and reusable processes.
## Configuring a Start subprocess action
To use a process as a [subprocess](../process/subprocess) you must first create it. Once the subprocess is created, you can start it from another (parent) process. To do this, add a **Start Subprocess** action to a [**User task**](../node/task-node) node in the parent process, or use a [Call activity node](../node/call-subprocess-tasks/call-activity-node).
Here are the steps to start a subprocess from a parent process:
1. First, create a [process](../process/process-definition) designed to be used as a [subprocess](../process/subprocess).
2. In the parent process, create a **user task** node where you want to start the subprocess created at step 1.
3. Add a **Start subprocess** action to the task node.
4. Configure the **Start Subprocess** action and from the dropdown list choose the subprocess created at step 1.
By following these steps, you can start a subprocess from a parent process and control its execution based on your specific use case.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/process_subprocess1.png)
The following properties must be configured for a **Start subprocess** action:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
### Action edit
* **Name** - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Order** - if multiple actions are defined on the same node, the running order should be set using this option
* **Timer expression** - it can be used if a delay is required on that action. The format used for this is [ISO 8601 duration format](https://www.w3.org/TR/NOTE-datetime) (for example, a delay of 30 seconds will be set up as `PT30S`)
* **Action type** - should be set to **Start Subprocess**
* **Trigger type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized
### Back in steps
* **Allow BACK on this action** - back in process is a functionality that allows you to go back in a business process and redo a series of previous actions in the process. For more details, check [**Moving a token backwards in a process**](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section.
### Parameters
* **Subprocess name** - the name of the process that you want to start as a subprocess
* **Branch** - a dropdown menu displaying available branches on the subprocess (both opened and merged)
* **Version** - the type of version that should be used within the subprocess
* **Latest Work in Progress**:
* Displayed if the selected branch is not merged into another branch.
* This configuration is used when there is a work-in-progress (WIP) version on the selected branch or when there is no WIP version on the selected branch due to either work in progress being submitted or the branch being merged.
* In such cases, the latest available configuration on the selected branch is used.
* **Latest Submitted Work**:
* This configuration is used when there is submitted work on the selected branch, and the current branch has been submitted on another branch (latest submitted work on the selected branch is not the merged version).
* **Custom Version**:
* Displayed if the selected branch contains submitted versions.
* **Copy from current state** - if a value is set here, it will overwrite the default behavior (of copying the whole data from the subprocess) with copying just the data that is specified (based on keys)
* **Exclude from current state** - what fields do you want to exclude when copying the data from the parent process to the subprocess (by default all data fields are copied)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/subprocess_version.png)
**Advanced configuration**
* **Show Target Process** - ID of the current process, to allow the subprocess to communicate with the parent process (which is the process where this action is configured)
### Data to send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [**User task configuration**](../node/user-task-node) section)
**Data to send** option is configurable only when the action **trigger type** is **Manual**.
## Example
Let's create a main **process**. In it, we add a user task node that represents a menu page, and on this node we add multiple subprocess actions, each representing a menu item. When you select a menu item, the corresponding subprocess runs.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/subprocess_menu1.png)
To start a subprocess, we can, for example, create the following minimum configuration in a user task node (now we configure the process where we want to start a subprocess):
* **Action** - `menu_item_1` - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Trigger type** - Manual; Optional
* **Repeatable** - yes
* **Subprocess** - `docs_menu_item_1` - the name of the process that you want to start as a subprocess
* **Exclude from current state** - `test.price` - copy all the data from the parent, except the price data
* **Copy from current state** - leave this field empty to copy all the data (except the keys specified in the **Exclude from current state** field); otherwise, add the keys from which you wish to copy the data (see the sketch below)
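For instance (hypothetical data), if the parent state contained `{"test": {"price": 100, "name": "Item A"}}`, the subprocess above would start with the price stripped out:
```json
{
  "test": {
    "name": "Item A"
  }
}
```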
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/subprocess_example2.png)
**Advanced configuration**
* **Target process (parentProcessInstanceId)** - `${processInstanceId}` - current process ID
#### Result
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/subprocess_example.gif)
# Upload file action
An Upload File action is an action type that allows you to upload a file to a service available on Kafka.
**Why is it useful?** The action will receive a file from the frontend and send it to Kafka, and will also attach some metadata.
## Configuring an Upload file action
Multiple options are available for this type of action and can be configured via the **FlowX Designer**. To configure an Upload File action, use the **Actions** tab at the [task node level](../../flowx-designer/managing-a-process-flow/adding-an-action-to-a-node), which has the following configuration options:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
### Action edit
* **Name** - used internally to make a distinction between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions
* **Order** - if multiple actions are defined on the same node, the running order should be set using this option
* **Timer expression** - it can be used if a delay is required on that action. The format used for this is [ISO 8601 duration format](https://www.w3.org/TR/NOTE-datetime) (for example, a delay of 30 seconds will be set up as `PT30S`)
* **Action type** - should be set to **Upload File**
* **Trigger type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/upload_file_action_edit.png)
### Back in steps
* **Allow BACK on this action** - back in process is a functionality that allows you to go back in a business process and redo a series of previous actions in the process. For more details, check [Moving a token backwards in a process](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section.
### Parameters
* **Address** - the Kafka topic where the file will be posted
* **Document Type** - other metadata that can be set (useful for the [document plugin](../../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview))
* **Folder** - allows you to configure a value by which the file will be identified in the future
* **Advanced configuration (Show headers)** - this represents a JSON value that will be sent on the headers of the Kafka message
### Data to send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [User Task configuration](../node/user-task-node) section)
**Data to send** option is configurable only when the action **trigger type** is **Manual**.
## Example
An example of **Upload File Action** is to send a file to the [document plugin](../../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview). In this case, the configuration will look like this:
**Parameters configuration**
* **Address (topicName)** - will be set to (the id of the document plugin service) `ai.flowx.in.document.persist.v1`
* **Document Type** - metadata used by the document plugin; here we will set it to `BULK`
* **Folder** - the value by which we want to identify this file in the future (here we use the **client.id** value available on the process instance data: `${application.client.id}`)
**Advanced configuration**
* **Headers** - headers will send extra metadata to this topic - `{"processInstanceId": ${processInstanceId}, "destinationId": "currentNodeName"}`
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/upload_file_action_params.png)
# Call activity node
Call activity is a node that provides advanced options for starting subprocesses.
There are cases when extra functionality is needed on certain nodes to enhance process management and execution.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/call_activity1.png)
The Call Activity node contains a default action for starting a subprocess, which can be started in two modes:
* **Async mode**: The parent **process** will continue without waiting for the subprocess to finish.
Select if this task should be invoked asynchronously. Make tasks asynchronous if they cannot be executed instantaneously, for example, a task performed by an outside service.
* **Sync mode**: The parent process must wait for the subprocess to finish before advancing.
The start mode can be chosen when configuring the call activity.
If the parent process needs to wait for the subprocess to finish and retrieve results, the parent process key that will hold the results must be defined using the *output key* node configuration value.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ca_output_key.png)
## Starting multiple subprocesses
This node type can also be used to start a set of subprocesses that run at the same time.
This is useful when there is an array of values in the parent process parameters, and a subprocess needs to be started for each element in that array.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ca_parent_array.png)
#### Business rule example
Below is an example of an MVEL business rule used to generate a list of shipping codes:
```java
import java.util.*;
def mapValues(shippingCode) {
return {
"shippingCode": shippingCode
}
}
shippingCodeList = [];
shippingCodeList.add(mapValues("12456"));
shippingCodeList.add(mapValues("146e3"));
shippingCodeList.add(mapValues("24356"));
shippingCodeList.add(mapValues("54356"));
output.put("shippingCodeList", shippingCodeList);
```
In this example, the `shippingCodeList` array contains multiple shipping code maps. Each of these maps could represent the parameters for an individual subprocess. The ability to generate and handle such arrays allows the system to dynamically start and manage multiple subprocesses based on the elements in the array, enabling parallel processing of tasks or operations.
To achieve this, select the *parallel multi-instance* option. The *collection key* name from the parent process also needs to be specified.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ca_collection.png)
When designing such a subprocess that will be started in a loop, remember that the input value for the subprocess (one of the values from the array in the parent process) will be stored in the subprocess parameter values under the key named *item*. This key should be used inside the subprocess. If this subprocess produces any results, they should be stored under a key named *result* to be sent back to the parent process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ca_subprocess.png)
#### Subprocess business rule example
Here's an MVEL business rule for a subprocess that processes shipping codes:
```java
import java.util.*;
map = new HashMap();
if (input.item.shippingCode.startsWith("1")) {
map.package = "Fragile";
} else {
map.package = "Non-fragile";
}
map.shippingCode = input.item.shippingCode;
output.put("result", map);
```
## Result (one of the subprocess instances)
The result below shows the output of one subprocess instance, which handled a single shipping code. The structure is:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ca_result.png)
```json
{
"package": "Non-fragile",
"shippingCode": "54356"
}
```
This contains the result of processing the specific shipping code, indicating additional attributes related to the shipping code (e.g., package type) determined during the subprocess execution.
# Start embedded subprocess
The Start Embedded Subprocess node initiates subprocesses within a parent process, allowing for encapsulated functionality and enhanced process management.
## Overview
The Start Embedded Subprocess node enables the initiation of subprocesses within a parent process, offering a range of features and options for enhanced functionality.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/start_embedded_subprocess.png)
## Usage Considerations
### Data Management
Embedded subprocesses offer advantages such as:
* Segregated sections from a subprocess can be utilized without rendering them over the parent process.
* Data is stored within the parent process instance, eliminating the need for data transfer.
* Embedded subprocesses are visible in the navigation view.
### Runtime Considerations
**Important** runtime considerations for embedded subprocesses include:
* The **child process** must have only **one swimlane**.
* Runtime swimlane permissions are inherited from the parent.
* Certain boundary events are supported on the Start Embedded Subprocess node; Timer events are currently not implemented.
Embedded subprocesses cannot have multiple swimlanes in the current implementation.
## Example
Let's explore this scenario: Imagine you're creating a process that involves a series of steps, each akin to a sequential movement of a stepper. Now, among these steps, rather than configuring one step from scratch, you can seamlessly integrate a pre-existing process, treating it as a self-contained unit within the overarching process.
### Step 1: Design the Embedded Subprocess
Log in to the FlowX Designer where you create and manage process flows.
Start by creating a new process or selecting an existing process where you want to embed the subprocess.
Design your navigation areas to match your needs.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/embedded_example_copy.png)
Make sure you allocated all your user tasks into the navigation area accordingly.
Within the selected process, design the subprocess by adding necessary tasks, events and so on. Ensure that the subprocess is contained within **a single swimlane**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/subprocess_to_embed.png)
To initiate a process with an embedded subprocess, designate the root navigation area of the subprocess as an inheritance placeholder by applying the **Parent Process Area** label in the **Navigation Areas**.
Ensure that the navigation hierarchy within the Parent Process Area can be displayed beneath the parent navigation area within the main process interface.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/embed_sbprc.gif)
### Step 2: Configure Start Embedded Subprocess Node
Within the parent process, add a **Start Embedded Subprocess Node** from the node palette to initiate the embedded subprocess.
Configure the node to specify the embedded subprocess that it will initiate. This typically involves selecting the subprocess from the available subprocesses in your process repository.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/config_embed_node.png)
### Step 3: Customize Subprocess Behavior
[**Alternative flows**](../../process/navigation-areas#alternative-flows) configured in the **main process** will also be applied to **embedded subprocesses** if they share the same name.
Within the subprocess, handle data as needed. Remember that data is stored within the parent process instance when using embedded subprocesses.
Implement boundary events within the subprocess if specific actions need to be triggered based on certain conditions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/subprocess_instance.gif)
### Step 4: Test Integration
Test the integration of the embedded subprocess within the parent process. Ensure that the subprocess initiates correctly and interacts with the parent process as expected.
Verify that data flows correctly between the parent process and the embedded subprocess. Check if any results produced by the subprocess are correctly captured by the parent process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/final_embed_result.gif)
# Error events
Error Events expand the capabilities of process modeling and error handling within BPMN processing. These Error Event nodes enhance the BPMN standard and offer improved control over error management.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/error_events.png)
## Intermediate event - error event (interrupting)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/error_event.png#center)
### Key Characteristics
1. **Boundary of an Activity node or Subprocess:**
* Error Events can only be used on the boundary of an activity, including subprocess nodes. They cannot be placed in the normal flow of the process.
2. **Always Interrupt the Activity:**
* It's important to note that Error Events always interrupt the activity to which they are attached. There is no non-interrupting version of Error Events.
3. **Handling Thrown Errors:**
* A thrown error, represented by an Error Event, can be caught by an Error Catch Event. This is achieved specifically using an Error Boundary Event, which is placed on the boundary of the corresponding activity.
4. **Using error events on Subprocesses nodes**:
* An error catch event can be linked to a subprocess, with the error source residing within the subprocess itself, denoted by the presence of an error end event, signifying an abnormal termination of the subprocess.
### Configuring an Error Intermediate boundary event
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/error_scenario.png)
* **Name**: Assign a name to the event for easy identification.
* **Condition**: Specify the condition that triggers the error event. Various script languages can be used for defining conditions, including:
* MVEL
* JavaScript
* Python
* Groovy
To draw a sequence from an error event node and link it to other nodes, right-click on the node and select **Add Sequence**.
When crafting a condition, use a predefined key as illustrated below:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/scenario_input.gif)
For instance, in the example provided, we've taken a process key defined on a switch UI element and constructed a user-defined condition like this: `input.application.switch == true`.
* **Priority**: Determine the priority level of this error event in relation to other error events added on the same node.
When multiple error events are configured for a node, and multiple conditions simultaneously evaluate to true, only one condition can interrupt the ongoing activity and advance the token to the next node. The determination of which condition takes precedence is based on the "priority" field.
If the "priority" field is set to "null," the system will randomly select one of the active conditions to trigger the interruption.
* `input.application.switch` - this key binds the value of the Switch UI element within the "application" part of the "input"; in this example it captures the user's selection.
* `==` - the equality operator; it checks whether the value on the left equals the value on the right.
* `true` - a boolean value, typically representing a state of "true" or "on".
So, when you put it all together, the statement checks whether the value of `input.application.switch` equals the boolean `true`. If it does, the condition is met and the error event is triggered; otherwise, the condition is false and the flow continues normally.
#### Use Case: Handling Errors during User Task Execution
**Description:** This use case pertains to a page dedicated to collecting client contact data. Specifically, it deals with scenarios where users are given the opportunity to verify their email addresses and phone numbers.
In this scenario we will create a process to throw an error if an email address is not valid.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/error_execution.png)
##### Example configuration:
1. **Error Boundary Event:** We will set an error boundary event associated with a user task.
2. **Error Node:** The node is responsible for redirecting the user to other flows once the user's email address has been validated against the defined condition.
```java
input.application.client.contactData.email.emailAddress != "john.doe@email.com"
```
The expression checks if the email address configured in `application.client.contactData.email.emailAddress` key is not equal to "[john.doe@email.com](mailto:john.doe@email.com)." If they are not the same, it evaluates to true, indicating a mismatch.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_val_v1.png)
3. **Flow Control:** Depending on the outcome of the validation process, users will be directed to different flows, which may involve displaying error modals as appropriate.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/error_email.gif)
# Exclusive gateway
In the world of process flows, decisions play a crucial role, and that's where the Exclusive Gateway comes into play. This powerful tool enables you to create conditional pathways with ease.
## Configuring an Exclusive Gateway node
![Exclusive gateway](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/gateway_exclusive.png#center)
To configure this node effectively, it's essential to set up both the **input** and **output** sequences within the gateway process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/gateway_exclusive_diagram.png)
### General configuration
* **Node name**: Give your node a meaningful name.
* **Can go back**: Enabling this option allows users to revisit this step after completing it.
When a step has "Can Go Back" set to false, all preceding steps become inaccessible.
* [**Swimlane**](../../platform-deep-dive/user-roles-management/swimlanes): Choose a swimlane, ensuring that specific user roles have access only to certain process nodes. If there are no multiple swimlanes, the value is **Default**.
* [**Stage** ](../../platform-deep-dive/plugins/custom-plugins/task-management/using-stages): Assign a stage to the node.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/gateway_exclusive_stages.png)
### Gateway decisions
* **Language**: When configuring conditions, you can use [MVEL](/4.0/docs/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel) (or [DMN](#dmn-example)) expressions that evaluate to either **true** or **false**.
* **Conditions**: In the **Gateway Decisions** tab, you can see that the conditions (**if, else if, else**) are already built-in and you can **select** the destination node when the condition is **true**.
The order of expressions matters; the first **true** evaluation stops execution, and the token moves to the selected node.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/gateway_rule.png)
After the exclusive portion of the process, where one path is chosen over another, you'll need to either end each path (as in the example below) or reunite them into a single process (as in the example above) using a new exclusive gateway without any specific configuration.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/end_other_FLOW.png)
## MVEL example
### Getting input from a Switch UI element
Let's consider the following example: we want to create a process that displays 2 screens and one modal. The gateway will direct the token down a path based on whether a switch element (in our case, VAT) is toggled to true or false.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/vat_example.png)
If, during the second screen, the VAT switch is toggled on, the token will follow the second path, displaying a modal.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/vat_on.gif)
After interacting with the modal, the token will return to the main path, and the process will continue its primary flow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/process_run_xor.png)
#### Example configuration
* **Language**: MVEL
* **Expression**:
```java
input.application.company.vat == true // you can use the same method to access a value for other supported scripts in our platform: JavaScript, Python and Groovy
```
This expression accesses a specific value within a structured data object; the general format is `input.{{key path to the value}}`. Here it checks whether the `application.company.vat` key (attached to the Switch UI element) in the input data is set to `true`. If it is, the condition is met and returns true; otherwise, it returns false.
To ensure that the stored data can be accessed through the `input` object, add a "Data to send" action on the node where you define your keys and your UI.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/config_example_xor.png)
The `application.company.vat` key corresponds to the switch UI element.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/VAT_key.png)
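To visualize what the expression navigates, here is an illustrative sketch of the process instance data after the "Data to send" action runs (the key path matches this example; the value is hypothetical):
```json
{
  "application": {
    "company": {
      "vat": true
    }
  }
}
```
With the data above, `input.application.company.vat == true` evaluates to true, sending the token down the modal path.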
## DMN example
If you prefer to use [DMN](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-dmn) to define your gateway decisions, you can do so using exclusive gateways.
![Gateway Decisions](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/dmn_gif.gif)
### Getting input from a Switch UI element
**Gateway Decision - DMN example** [(Applicable only for Exclusive Gateway - XOR)](./exclusive-gateway-node)
![Gateway Decision](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/xor_dmn_decision.png)
#### Configuration example
* **Language**: DMN
* **Expression**: `application.company.vat`
In our case, the expression field is filled in with the `application.company.vat` key, which corresponds to the switch UI element.
* **Hit Policy**: Unique
* **Type**: Boolean
* **Next Node name**: Enter the name of the nodes to which you prefer the token to be directed.
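Conceptually, the resulting decision table looks like this (an illustrative sketch; the destination node names are placeholders, not taken from the screenshots above):

| # | `application.company.vat` (Boolean) | Next node |
| - | ----------------------------------- | -------------------- |
| 1 | `true` | `display_vat_modal` |
| 2 | `false` | `continue_main_flow` |

Because the hit policy is **Unique**, at most one rule can match for any given input.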
### Getting input from multiple UI elements
Consider another scenario in which the process relies on user-provided information, such as age and membership status, to determine eligibility for a discount. This decision-making process utilizes a DMN (Decision Model and Notation) decision table, and depending on the input, it may either conclude or continue with other flows.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/dmn_input.gif)
#### Configuration example
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/dmn_multiple_UI_elements.png)
In our case, the expression fields are populated with the `application.company.vat` and `application.client.membership` keys, which correspond to the user input collected on the initial screen.
The process is visualized as follows:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/dmn_example.gif)
# Message catch boundary events
Boundary events are integral components linked to **user tasks** within a process flow. Specifically, Message Catch Boundary Events are triggered by incoming messages and can be configured as either interrupting or non-interrupting based on your requirements.
**Why is it important?** It empowers processes to actively listen for and capture designated messages during the execution of associated user tasks.
When a message is received, the flow continues along the sequence attached to the boundary **node**. Multiple message catch boundary events can exist on the same user task, but only one can be activated at a time.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_catch_boundary_multiple.png)
Message Catch Boundary Events can be categorized by their behavior, resulting in two main classifications:
* [**Interrupting**](#message-catch-interrupting-event)
* [**Non-interrupting**](#message-catch-non-interrupting-event)
## Message catch interrupting event
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_catch_interrupting_event.png#center)
In the case of an Interrupting Message Catch Boundary Event triggered by a received message, it immediately interrupts the ongoing task. The associated task concludes, and the **process flow** advances based on the received message.
* **Use Cases:**
* Suitable for scenarios where the receipt of a specific message requires an immediate interruption of the current activity.
* Often used when the received message signifies a critical event that demands prompt attention.
* **Example:**
* A user task is interrupted as soon as a high-priority message is received, and the process flow moves forward to handle the critical event.
## Message catch non-interrupting event
Contrastingly, a Non-Interrupting Message Catch Boundary Event continues to listen for messages during the execution of the associated task without immediate interruption. The task persists in its execution even upon receiving messages. Multiple non-interrupting events can be activated concurrently while the task is still active, allowing the task to continue until its natural completion.
* **Use Cases:**
* Appropriate for scenarios where multiple messages need to be captured during the execution of a user task without disrupting its flow.
* Useful when the received messages are important but do not require an immediate interruption of the ongoing activity.
* **Example:**
* A user task continues its execution while simultaneously capturing and processing non-critical messages.
## Configuring a message catch interrupting/non-interrupting event
#### General config
* **Correlate with Throwing Events** - the dropdown lists all throw events from accessible process definitions
Establishes correlation between the catch event and the corresponding throw event. Selection of the relevant throw event triggers the catch event upon message propagation.
* **Correlation Key** - process key used to correlate received messages with specific process instances
The correlation key associates incoming messages with specific process instances. Upon receiving a message with a matching correlation key, the catch event is triggered.
* **Receive Data (Process Key)** - the catch event can receive and store data associated with the message in a process variable with the specified process key
This received data becomes available within the process instance, facilitating further processing or decision-making.
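For illustration, assume the **Receive Data** process key is set to a hypothetical `verificationResult`. A caught message carrying the body below would then make that data available at `verificationResult` in the process instance (both the key name and the payload are assumptions for the sake of the example):
```json
{
  "status": "verified",
  "verifiedBy": "identity-service"
}
```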
## Illustrating boundary events (interrupting and non-interrupting)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/boundary_multiple.png)
**Business Scenario:**
A customer initiates the account opening process. Identity verification occurs, and after successful verification, a message is thrown to signal that the account is ready for activation.
Simultaneously, the account activation process begins. If there are issues during activation, they are handled through the interruption process. The overall process ensures a streamlined account opening experience while handling potential interruptions during activation, and also addresses exceptions through the third lane.
# Message catch start event
Message Catch Start Event node represents the starting point for a process instance based on the receipt of a specific message. When this event is triggered by receiving the designated message, it initiates the execution of the associated process.
**Why is it important?** The Message Catch Start Event allows a process to be triggered and initiated based on the reception of a specific message.
## Configuring a message catch start event
A Message Catch Start Event is a special event in a process that initiates the start of a process instance upon receiving a specific message. It acts as the trigger for the process, waiting for the designated message to arrive. Once the message is received, the process instance is created and begins its execution, following the defined process flow from that point onwards. The Message Catch Start Event serves as the entry point for the process, enabling it to start based on the occurrence of the expected message.
To use this type of node together with the Task Management plugin, you must have a service account defined in your identity solution. For more information, check our documentation on how to create service accounts using Keycloak, [**here**](../../../../setup-guides/access-management/configuring-an-iam-solution#process-engine-service-account).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/start_catch_message_event.png#center)
#### General config
* **Can go back?** - setting this to true allows users to return to this step after completing it; when encountering a step with `canGoBack` set to false, all steps found behind it become unavailable
* **Correlate with throwing events** - the dropdown contains all throw events from the process definitions accessible to the user
* **Correlation key** - a process key that uniquely identifies the instance to which the message is sent
* **Send data** - allows the user to define a JSON structure with the data to be sent along with the message
* **Stage** - assign a stage to the node, if needed
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_catch_start_config.png)
## Interprocess communication with throw and message catch start events
### Throw process
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/throw_for_start.png)
#### Configuring the throw intermediate event
##### General config
* **Can go back?** - Setting this to true allows users to return to this step after completion. When encountering a step with `canGoBack` set to false, all steps found behind it become unavailable.
* **Correlate with catch events** - Should match the configuration in the message catch start event.
* **Correlation key** - A process key that uniquely identifies the instance to which the message is sent.
* **Send data** - Define a JSON structure with the data to be sent along with the message. In our example, we will send a test object:
```json
{"test": "docs"}
```
* **Stage** - Assign a stage to the node if needed.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/throw_for_start_config.png)
### Start with catch process
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/start_catch_message_proc.png)
#### Configuring the message catch start event
Remember, it's mandatory to have a service account defined in your identity solution to have the necessary rights to start a process using the message catch start event. Refer to our documentation on how to create service accounts using Keycloak, [**here**](../../../../setup-guides/access-management/configuring-an-iam-solution#process-engine-service-account).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/catch_start_event_config.png)
After running the throw process, the process containing the start catch message event will be triggered. The data is also sent:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/start_catch_event_response.png)
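Assuming the **Receive data** key on the catch start event is set to a hypothetical `startData`, the newly started instance would hold the object sent by the throw event:
```json
{
  "startData": {
    "test": "docs"
  }
}
```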
# Message Events
Message events serve as a means to incorporate messaging capabilities into business process modeling. These events are specifically designed to capture the interaction between different process participants by referencing messages.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_events_new.png)
By leveraging message events, processes can pause their execution until the expected messages are received, enabling effective coordination and communication between various system components.
## Intermediate events
| Trigger | Description | Marker |
| ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| Message | A Message Intermediate Event serves to send or receive messages. A filled marker denotes a "throw" event, while an unfilled marker indicates a "catch" event. This either advances the process or alters the flow for exception handling. Identifying the Participant is done by connecting the Event to a Participant through a Message Flow. | Throw ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/throw_message_event.png#center) Catch ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_catch_intermediate_event.png#center) |
## Boundary events
A boundary event is handled by first consuming the event occurrence. Message Catch Boundary Events, triggered by incoming messages, can be configured as either interrupting or non-interrupting.
| Trigger | Description | Marker |
| ------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------: |
| Message | **Non-interrupting Message Catch Event**: The event can be triggered at any time while the associated task is being performed. | Non-interrupting ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/%20message_catch_non_interrupting.png#center) |
| Message | **Interrupting Message Catch Event**: The event can be triggered at any time while the associated task is being performed, interrupting the task. | Interrupting ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_catch_interrupting_event.png#center) |
## Intermediate vs boundary
**Intermediate Events**
* Intermediate events temporarily halt the process instance, awaiting a message.
**Boundary Interrupting Events**
* These events can only be triggered while the token is active within the parent node.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/token_interrupting.png)
* Upon activation, the parent node concludes, and the token progresses based on the boundary flow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/token_intterrupting_exec.png)
**Boundary Non-Interrupting Events**
* Similar to interrupting events, non-interrupting events can only be triggered while the token is active in the parent node.
* Upon triggering, the parent node remains active, and a new token is generated to execute the boundary flow concurrently.
FLOWX.AI works with the following message events nodes:
* [**Message catch start event**](./message-catch-start-event)
* [**Message intermediate events**](./message-intermediate/)
* [**Message catch boundary event**](./message-catch-boundary-event)
## Message events correlation
Messages are not sent directly to process instances. Instead, message correlation is achieved through message subscriptions, which consist of the message name and the correlation key (also referred to as the correlation value).
A correlation key is a key whose value can be shared across multiple instances; instances are matched based on this shared value. What matters for the matching is not the attribute's name (even though the mapping is defined on that attribute), but the value itself.
For example, in a user onboarding process, your instance holds a unique personal identification number (SSN); another process that needs a portion of your process data, specifically the SSN value, can use that value to correlate with your instance.
The communication works as follows: a message is received on the Kafka topic `${kafka.topic.naming.prefix}.core.message.event.process${kafka.topic.naming.suffix}`, where the engine listens and writes the response.
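As an illustration, with a hypothetical naming prefix of `ai.flowx.dev` and an empty suffix, the resolved topic name would be `ai.flowx.dev.core.message.event.process`.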
## Message events configuration
* `attachedTo`: a property that applies to boundary events
* `messageName`: a name that is unique at the database level; it must be the same for the throw and catch events
* `correlationKey`: a process variable used to uniquely identify the instance to which the message is sent
* `data`: allows defining the JSON message body mapping as output and input
### Data example
```json
{
"document":{
"documentId": "${document.id}",
"documentUrl": "${document.url}"
}
}
```
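Putting these properties together, a message event configuration can be pictured as follows (an illustrative, JSON-style sketch rather than an exact storage format; the `attachedTo` value is a hypothetical node name):
```json
{
  "attachedTo": "user_task_verify_documents",
  "messageName": "documentVerified",
  "correlationKey": "processInstanceId",
  "data": {
    "document": {
      "documentId": "${document.id}",
      "documentUrl": "${document.url}"
    }
  }
}
```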
# Intermediate message events in business processes
Business processes often involve dynamic communication and coordination between different stages or departments. Intermediate Message Events play an important role in orchestrating information exchange, ensuring effective synchronization, and enhancing the overall efficiency of these processes.
* [Throw and catch on sequence - credit card request process example](#throw-and-catch-on-sequence---credit-card-request-process-example)
* [Throw and catch - interprocess communication](#interprocess-communication-with-throw-and-catch-events)
## Throw and catch on sequence - Credit card request process example
### Business scenario
In the following example, we'll explore a credit card request process that encompasses the initiation of a customer's request, verification based on income rules, approval or rejection pathways, and communication between the client and back office.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/throw_catch_intermediate_process.png)
#### Activities
##### Default swimlane (client)
* **Start Event:** Marks the commencement of the process.
* **User Task 1:** Customer Requests New Credit Card - Involves the customer submitting a request for a new credit card.
* **Exclusive Gateway:** The process diverges based on the verification result (dependent on income rules).
* **Path A (Positive Verification):**
* **User Task 2:** Approve Credit Card Application - The bank approves the credit card application.
* **End Event:** Denotes the conclusion of the process for approved applications.
* **Path B (Negative Verification):**
* **Parallel Gateway Open:** Creates two parallel zones.
* **First Parallel Zone:**
* **User Task 3:** Reject Credit Card Application - The bank rejects the credit card application.
* **Message Throw Intermediate Event:** Signals the rejection, throwing a message to notify the back office department.
* **End Event:** Signifies the end of the process for rejected applications.
* **Second Parallel Zone:**
* **User Task 3:** Reject Credit Card Application - The bank rejects the credit card application.
* **Message Catch Intermediate Event:** The back office department is notified about the rejection.
* **Send Message Task**: A notification is sent via email to the user about the rejection.
* **End Event:** Signifies the end of the process for rejected applications.
##### Backoffice swimlane
* **Message Catch Intermediate Event:** The back office department awaits a message before proceeding with the rejection letter.
* **Send Message Task:** Send Rejection Letter - Involves sending a rejection letter to the customer.
### Sequence flow
```mermaid
graph TD
subgraph Default Swimlane
StartEvent[Start Process]
UserTask1[User Task 1: Customer Requests New Credit Card]
Gateway1{Exclusive Gateway}
UserTask2[User Task 2: Approve Credit Card Application]
endA[End Event: End approved scenario]
ParallelGateway{Parallel Gateway}
UserTask3A[User Task 3: Reject Credit Card Application]
MessageThrow[Message Throw Intermediate Event: Throwing a message to notify the back office department.]
Gateway2{Close Parallel}
endC[End Event: Signifies the end of the process for rejected applications.]
end
subgraph Backoffice Swimlane
MessageCatch[Message Catch Intermediate Event]
SendEmailTask[Send Message Task: Send rejection letter]
end
StartEvent -->|Start Process| UserTask1
UserTask1 -->|Income Verification| Gateway1
Gateway1 -->|Positive Verification| UserTask2
UserTask2 -->|Approved| endA
Gateway1 -->|Negative Verification| ParallelGateway
ParallelGateway -->|First Parallel Zone| UserTask3A
UserTask3A -->|Credit Card Rejected| MessageThrow
MessageThrow --> Gateway2 -->|Second Parallel Zone| MessageCatch
MessageCatch --> SendEmailTask
SendEmailTask --> Gateway2
Gateway2 -->|End| endC
```
### Message flows
A message flow connects the Message Throw Intermediate Event to the Message Catch Intermediate Event, symbolizing the communication of the credit card rejection from the rejection task to the back office department.
In summary, when a customer initiates a new credit card request, the bank verifies the information. If declined, a message is thrown to notify the back office department. The Message Catch Intermediate Event in the back office awaits this message to proceed with issuing and sending the rejection letter to the customer.
### Configuring the BPMN process
To implement the illustrated BPMN process for the credit card request, follow these configuration steps:
**FLOWX.AI Designer**: Open FLOWX.AI Designer.
**Draw BPMN Diagram**: Import the provided BPMN diagram into FLOWX.AI Designer or recreate it by drawing the necessary elements.
**Customize Swimlanes**: Set up the "Default" and "Backoffice" swimlanes to represent different departments or stakeholders involved in the process. This helps visually organize and assign tasks to specific areas.
**Define User Tasks**: Specify the details for each user task. Configure User Task 1, User Task 2, and User Task 3 with appropriate screens.
* **User Task 1** - *customer\_request\_new\_credit\_card*
We will use the value from `application.income` key added on the slider UI element to create an MVEL business rule in the next step.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/user_task1_interm.gif)
* **User Task 2** - *approve\_credit\_card\_request*
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/user_task2_interm.gif)
In this screen, we configured a modal to display the approval.
* **User Task 3** - *reject\_credit\_card\_request*
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/user_task3_interm.gif)
**Configure Gateways**: Adjust the conditions for the Exclusive Gateway based on your business rules. Define the conditions for positive and negative verifications, guiding the process down the appropriate paths.
In our example, we used an MVEL rule to determine eligibility based on the income of the user. We used the `application.income` key configured in the first user task to create the rule.
![MVEL Example](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/mvel_example_gateway.png)
Also, add Parallel gateways to open/close parallel paths.
![Parallel](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/parallel_open_close.gif)
**Set Message Events**: Configure the Message Throw and Message Catch Intermediate Events in the "Default" and "Backoffice" swimlanes, respectively. Ensure that the Message Catch Intermediate Event in the "Backoffice" swimlane is set up to wait for the specific message thrown by the Message Throw event. This facilitates communication between different stages of the process.
**Define End Events**: Customize the End Events for approved and rejected applications in the "Default" swimlane. Also, set an end event in the "Backoffice" swimlane to indicate the completion of the back-office tasks.
**Configure Send Message Task**: Set up the Send Message Task in the "Backoffice" swimlane to send a rejection letter as a notification to the user.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/sending_a_notification.png)
Define the content of the rejection letter, the method of notification, and any additional details required for a seamless user experience. More details on how to configure a notification can be found in the following section:
[**Sending a notification**](../../../../platform-deep-dive/plugins/custom-plugins/notifications-plugin/sending-a-notification)
**Validate and Test**: Validate the BPMN diagram for correctness and completeness. Test the process flow by simulating different scenarios, such as positive and negative verifications.
### Configuring intermediate message events
Configuring message events is a crucial step in orchestrating effective communication and synchronization within a business process. Whether you are initiating a message throw or awaiting a specific message with a catch, the configuration process ensures information exchange between different components of the process.
In this section, we explore the essential steps and parameters involved in setting up message events to optimize your BPMN processes.
### Message throw intermediate event
A Message Throw Intermediate Event is an event in a process where a message is sent to trigger communication or action with another part of the process (can be correlated with a catch event). It represents the act of throwing a message to initiate a specific task or notification. The event creates a connection between the sending and receiving components, allowing information or instructions to be transmitted. Once the message is thrown, the process continues its flow while expecting a response or further actions from the receiving component.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_throw_intrmdt.png)
#### General Configuration
* **Can go back?** - Setting this to true allows users to return to this step after completing it. When encountering a step with `canGoBack` false, all steps found behind it will become unavailable.
* **Correlate with catch message events** - The dropdown contains all catch messages from the process definitions accessible to the user, in our example: `throwcatchsequenceloan`
It is imperative to define the message for the catch event first. This ensures its availability in the dropdown menu when configuring the throw intermediate event.
* **Correlation key** - This is a process key that uniquely identifies the instance to which the message is sent. In our example, we utilized the `processInstanceId` as the identifier, dynamically generated at runtime. This key is crucial for establishing a clear and distinct connection between the sender and recipient in the messaging process.
A correlation key is a key that can have the same value across multiple instances, and it is used to match instances based on their shared value. It is not important what the attribute's name is (even though we map based on this attribute), but rather the value itself when performing the matching between instances.
* **The Send data field** - This feature empowers you to define a JSON structure containing the data to be transmitted alongside the message. In our illustrative example, we utilized dynamic data originating from user input, specifically bound to a slider UI element.
```json
{"value": "${application.income}"}
```
* **Stage** - Assign a stage to the node.
In the end, this is what we have:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_throw_config.png)
### Message catch intermediate event
A Message Catch Intermediate Event is a type of event in a process that waits for a specific message before continuing with the process flow. It enables the process to synchronize and control the flow based on the arrival of specific messages, ensuring proper coordination between process instances.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_catch_intrmdt.png)
#### General Configuration
* **Can go back?** - Setting this to true allows users to return to this step after completing it. When encountering a step with `canGoBack` false, all steps found behind it will become unavailable.
* **Correlate with throwing events** - The dropdown contains all throw messages from the process definitions accessible to the user (must be the same message as the one assigned in the Message throw intermediate event)
* **Correlation key** - Process key used to establish a correlation between the received message and a specific process instance (must be the same as the one assigned in Message throw intermediate event).
* **Receive data** - The process key that will be used to store the data received from the throw event along with the message.
* **Stage** - Assign a stage to the node.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_catch_intrmdt_cfg.png)
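To make the exchange concrete: the throw event above sends `{"value": "${application.income}"}`. Assuming the **Receive data** key is set to a hypothetical `rejectionData` and the customer entered an income of 2000 (also hypothetical), the process instance would roughly end up containing:
```json
{
  "rejectionData": {
    "value": 2000
  }
}
```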
### Testing the final result
After configuring the BPMN process and setting up all the nodes, it is crucial to thoroughly test the process to ensure its accuracy and effectiveness.
We will test the path where the user gets rejected.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/testing_the_proc_interm.gif)
In the end, the user will receive this notification via email:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/notification_received.png)
## Interprocess communication with throw and catch events
Facilitate communication between different processes by using message intermediate events.
### Business scenario
Consider a Bank Loan Approval process where the parent process initiates a loan application. During the execution, it throws a message to a subprocess responsible for additional verification.
#### Activities
**Parent Process:**
* **Start Event:** A customer initiates a loan application.
* **Start Subprocess:** Initiates a subprocess for additional verification.
* **User Task:** Basic verification steps are performed in the parent process.
* **Throw Message:** After the basic verification, a message is thrown to indicate that the loan application is ready for detailed verification.
* **End Event:** The parent process concludes.
**Subprocess:**
* **Start Event:** The subprocess is triggered by the message thrown from the parent process.
* **Catch Message:** The subprocess catches the message, indicating that the loan application is ready for detailed verification.
* *(Perform Detailed Verification and Analysis)*
* **End Event:** The subprocess concludes.
### Sequence flow
```mermaid
graph TD
subgraph Parent Process
a[Start]
b[Start Subprocess]
c[Throw message to another process]
d[User Task]
e[End]
end
subgraph Subprocess
f[Start]
g[Catch event in subprocess]
h[End]
end
a --> b --> c --> d --> e
f --> g --> h
c --> g
```
### Message flows
* The parent process triggers the subprocess run node, initiating the child process.
* Within the child process, a message catch event waits for and processes the message thrown by the parent subprocess.
### Configuring the parent process (throw event)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/throw_and_catch_subpr.png)
Open **FLOWX.AI Designer** and create a new process.
Add a User Task for user input:
* Within the designer interface, add a "User Task" element to collect user input. Configure the user task to capture the necessary information that will be sent along with the message.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/user_task_loan_subrp.gif)
Integrate a [**Call activity**](../../../node/call-subprocess-tasks/call-activity-node.mdx) node:
* Add a "Subprocess Run Node" and configure it:
* **Start Async** - Enable this option. When subprocesses are initiated in sync mode, they notify the parent process upon completion. The parent process then manages the reception of process data from the child and resumes its flow accordingly
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/subprocess_run_node_cfg.png)
* Add and configure the "Start Subprocess" action:
* **Parameters**:
* **Subprocess name** - Specify the name of the process containing the catch message event.
* **Branch** - Choose the desired branch from the dropdown menu.
* **Version** - Indicate the type of version to be utilized within the subprocess.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/throw_subprocess_configuration.png)
For a more comprehensive guide on configuring the "Start Subprocess" action, refer to the following section:
[Start Subprocess action](../../../../building-blocks/actions/start-subprocess-action)
Insert a Throw Event for Message Initiation:
* Add a "Throw Event" to the canvas, indicating the initiation of a message.
* Configure the throw message intermediate event node:
* **Correlate with catch message events** - The dropdown contains all catch messages from the process definitions accessible to the user, in our example: `throwcatchDocs`
* **Correlation key** - This is a process key that uniquely identifies the instance to which the message is sent. In our example, we utilized the `processInstanceId` as the identifier, dynamically generated at runtime. This key is crucial for establishing a clear and distinct connection between the sender and recipient in the messaging process.
* **The Send data field** - This feature empowers you to define a JSON structure containing the data to be transmitted alongside the message. In our illustrative example, we utilized dynamic data originating from user input, specifically bound to some slider UI elements.
```json
{
"client_details": {
"clientIncome": ${application.income},
"loanAmount": ${application.loan.amount},
"loanTerm": ${application.loan.term}
}
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/throw_in_anthr_proc.png)
### Configuring the subprocess (catch event)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/catch_from_anthr_proc.png)
Insert an Intermediate Message Catch event and configure it:
* **Correlate with throwing events** - Utilize the same correlation settings added for the associated throw message event.
* **Correlation Key** - Set the correlation key to the parent process instance ID, identified as `parentProcessInstanceId`.
* **Receive data** - Specify the process key that will store the data received from the throw event along with the corresponding message.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/catch_interm_from_anthr_prc.png)
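Continuing the example, if the **Receive data** key is set to a hypothetical `verification` and the customer entered an income of 3000, a loan amount of 10000, and a term of 24 (all hypothetical values), the subprocess instance would roughly contain:
```json
{
  "verification": {
    "client_details": {
      "clientIncome": 3000,
      "loanAmount": 10000,
      "loanTerm": 24
    }
  }
}
```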
Integrate and Fine-Tune a Service Task for Additional Verification.
* Incorporate a service task to execute the additional verification process. Tailor the configuration to align with your preferred method of conducting supplementary checks.
### Throw with multiple catch events
Download the example
# Message catch intermediate event
A Message Catch Intermediate Event is a type of event in a process that waits for a specific message before continuing with the process flow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_catch_intermediate_event.png#center)
**Why is it important?** It enables the process to synchronize and control the flow based on the arrival of specific messages, ensuring proper coordination between process instances.
Similar to the message catch boundary event, the message catch intermediate event is important because it facilitates the communication and coordination between process instances through messages. By incorporating this event, the process can effectively synchronize and control the flow based on the arrival of specific messages.
A Message Catch Intermediate Event can be used as a standalone node; in that case, it blocks the process until it receives an event.
## Configuring a message catch intermediate event
Imagine a process where multiple tasks are executed in sequence, but the execution of a particular task depends on the arrival of a certain message. By incorporating a message catch intermediate event after the preceding task, the process will pause until the expected message is received. This ensures that the subsequent task is not executed prematurely and allows for the synchronization of events within the process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_catch_intrmdt.png)
#### General config
* **Can go back?** - setting this to true allows users to return to this step after completing it; when encountering a step with `canGoBack` set to false, all steps found behind it become unavailable
* **Correlate with throwing events** - the dropdown contains all throw messages from the process definitions accessible to the user
* **Correlation key** - process key used to establish a correlation between the received message and a specific process instance
* **Receive data** - the process key that will be used to store the data received along with the message
* **Stage** - assign a stage to the node
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_catch_intrmdt_cfg.png)
# Overview
An intermediate event is an occurrence situated between a start and an end event in a process or system. It is represented by a circle with a double line. This event can either catch or throw information, and the directional flow is indicated by connecting objects, determining whether the event is catching or throwing information.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/intermediate_message_events.png)
### Message throw intermediate event
This event throws a message and continues with the process flow.
It enables the sending of a message to a unique destination.
### Message catch intermediate event
This event waits for a message to be caught before continuing with the process flow.
# Message throw intermediate event
Using a Throw intermediate event is like throwing a message to tell someone about something. After throwing the message, the process keeps going, and other parts of the process can listen to that message.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/throw_message_event.png#center)
**Why is it important?** The Message Throw Intermediate Event allows different parts of a process to communicate and share information with each other.
## Configuring a message throw intermediate event
A Message throw intermediate event is an event in a process where a message is sent to trigger a communication or action with another part of the process (can be correlated with a catch event). It represents the act of throwing a message to initiate a specific task or notification. The event creates a connection between the sending and receiving components, allowing information or instructions to be transmitted. Once the message is thrown, the process continues its flow while expecting a response or further actions from the receiving component.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_throw_intrmdt.png)
#### General config
* **Can go back?** - setting this to true allows users to return to this step after completing it; when encountering a step with `canGoBack` set to false, all steps found behind it become unavailable
* **Correlate with catch events** - the dropdown contains all catch messages from the process definitions accessible to the user
* **Correlation key** - a process key that uniquely identifies the instance to which the message is sent
* **The data field** - allows the user to define a JSON structure with the data to be sent along with the message
* **Stage** - assign a stage to the node
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_throw_config.png)
# Send message/receive message tasks
Send message task and Receive message task nodes are used to handle the interaction between a running process and external systems. This is done using Kafka.
## Send message task
This node is used to configure messages that should be sent to external systems.
![Send Message Task](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/send-task_node.svg#center)
### Configuring a send message task
Node configuration is done by accessing the **Node Config** tab. You have the following configuration options for a send message task:
#### General configuration
Inside the **General Config** tab, you have the following properties:
* **Node Name** - the name of the node
* **Can Go Back** - switching this option to true will allow users to return to this step after completing it
When encountering a step with `canGoBack` switched to false, all steps found behind it will become unavailable.
* [**Swimlane**](../../platform-deep-dive/user-roles-management/swimlanes) - choose a swimlane (if there are multiple swimlanes on the process) to ensure only certain user roles have access to certain process nodes; if the process has a single swimlane, the value is **Default**
* [**Stage**](../../platform-deep-dive/plugins/custom-plugins/task-management/using-stages) - assign a stage to the node
![General Config](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_send_task_action.png)
To configure a send message task, we first need to add a new node and then configure an **action** (**Kafka Send Action** type):
1. Open **Process Designer** and start configuring a process.
2. Add a **send message task** node.
3. Select the **send message task** node and open the **Node Configuration**.
4. Add an **action** with the action type set to **Kafka Send Action**.
5. A few action parameters will need to be filled in depending on the selected action type.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/kafka_send_task.gif)
Multiple options are available for this type of action and can be configured via the FLOWX.AI Designer. To configure and [add an action to a node](../../flowx-designer/managing-a-process-flow/adding-an-action-to-a-node), use the **Actions** tab at the node level, which has the following configuration options:
* [Action Edit](#action-edit)
* [Back in steps (for Manual actions)](#back-in-steps)
* [Parameters](#parameters)
* [Data to send (for Manual actions)](#data-to-send)
#### Action Edit
* **Name** - used internally to make a distinction between different [actions](../actions/actions) on nodes in the process. We recommend defining an action naming standard to easily find the process actions
* **Order** - if multiple actions are defined on the same node, set the running order using this option
* **Timer Expression** - it can be used if a delay is required on that action. The format used for this is the [ISO 8601 duration format](./timer-events/timer-expressions#iso-8601); for example, a delay of 30 seconds is set as `PT30S` (more examples follow this list)
* **Action Type** - should be set to **Kafka Send Action** for actions used to send messages to external systems
* **Trigger Type** (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic
* **Required Type** (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
* **Repeatable** - should be checked if the action can be triggered multiple times
* **Autorun Children** - when this is switched on, the child actions (the ones defined as mandatory and automatic) will run immediately after the execution of the parent action is finalized
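For reference, a few more ISO 8601 duration examples for the **Timer Expression** field: `PT5M` is a five-minute delay, `PT1H` one hour, and `P1D` one day.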
#### Back in steps
* **Allow BACK on this action** - back in the process is a functionality that allows you to go back in a business process and redo a series of previous actions. For more details, check the [**Moving a Token Backwards in a Process**](../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) section
![Action Edit](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_send_action_edit.png)
#### Data to Send
* **Keys** - are used when data is sent from the frontend via an action to validate the data (you can find more information in the [User Task Configuration](./user-task-node) section)
You can configure **Data to Send** option only when the action **trigger type** is **Manual**.
![Parameters](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/parameters_message_send.gif)
For more information about Kafka, check the following sections:
### Example of a send message task usage
Send a message to a CRM integration to request a search in the local database:
#### Action Edit
* **Name** - pick a name that makes it easy to figure out what this action does, for example, `sendRequestToSearchClient`
* **Order** - 1
* **Timer Expression** - this remains empty if we want the action to be triggered as soon as the token reaches this node
* **Action Type** - Kafka Send Action
* **Trigger Type** - *Automatic* - to trigger this action automatically
* **Required Type** - *Mandatory* - to make sure this action will be run before advancing to the next node
* **Repeatable** - false, it only needs to run once
#### Parameters
Parameters can be added either using the **Custom** option (where you configure everything on the spot) or by using **From Integration** to import parameters already defined in an integration.
You can find more details about **Integrations Management** [here](../../platform-deep-dive/core-extensions/integration-management/integration-management-overview).
##### Custom
* **Topics** - `ai.flowx.in.crm.search.v1` the Kafka topic on which the CRM listens for requests
* **Message** - `{ "clientType": "${application.client.clientType}", "personalNumber": "${personalNumber.client.personalNumber}" }` - the message payload will have two keys, `clientType` and `personalNumber`, both with values from the process instance
* **Headers** - `{"processInstanceId": ${processInstanceId}}`
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_send_param1.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_send_param2.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_send_param3.png)
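For illustration, assume the process instance holds a client of type `PF` with personal number `1234567890123` (hypothetical values). The message published on `ai.flowx.in.crm.search.v1` would then resolve to:
```json
{
  "clientType": "PF",
  "personalNumber": "1234567890123"
}
```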
## Receive Message Task
This type of node is used when we need to wait for a reply from an external system.
![Receive Message Task](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/receive-task%20_node.svg#center)
The reply from the external system will be saved in the process instance values, on a specified key. If the message needs to be processed at a later time, a timeout can be set using the [ISO 8601](./timer-events/timer-expressions) format.
For example, let's think about a CRM microservice that waits to receive requests to look for a user in a database. It sends back the response on a topic that the process engine is configured to listen to.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/kafka_receive_message.png)
### Configuring a Receive Message Task
The values you need to configure for this node are the following:
* **Topic Name** - the topic name where the [process engine](../../platform-deep-dive/core-components/flowx-engine) listens for the response (this should be added to the platform and match the topic naming rule for the engine to listen to it) - `ai.flowx.out.crm.search.v1`
A naming pattern must be defined on the process engine to use the defined topics. It is important to know that all the events that start with a configured pattern will be consumed by the Engine. For example, `KAFKA_TOPIC_PATTERN` is the topic name pattern that the Engine listens to for incoming Kafka events.
* **Key Name** - will hold the result received from the external system; if the key already exists in the process values, it will be overwritten - `crmResponse`
For more information about Kafka configuration, click [here](../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka).
![Example of a Receive Message Task for a CRM integration](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_receive_kafka_ex.png)
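For illustration, a hypothetical CRM reply such as the one below would be stored in the process instance under the `crmResponse` key, overwriting any existing value (the payload shape is an assumption; the actual structure depends on what the external system sends back):
```json
{
  "clientExists": true,
  "clientId": "CRM-000123"
}
```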
#### From integration
After defining one integration (inside [Integration Management](../../platform-deep-dive/core-extensions/integration-management/)), you can open a compatible node and start using already defined integrations.
* **Topics** - topics defined in your integration
* **Message** - the **Message Data Model** from your integration
* **Headers** - all integrations have `processInstanceId` as a default header parameter; add any other relevant parameters
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/message_send_from_integr.gif)
# BPMN nodes
A Business Process Model and Notation (BPMN) node is a visual representation of a point in your process. Nodes are added at specific process points to denote the entrance or transition of a record within the process.
For a comprehensive understanding of BPMN, start with the following section:
[**Intro to BPMN**](/4.0/docs/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/BPMN%20nodes.png)
The FLOWX.AI platform supports various node types, each requiring distinct configurations to fulfill its role in the business flow.
## Types of BPMN nodes
Let's explore the key types of BPMN nodes available in FlowX:
* **Start and End nodes** - Mark the initiation and conclusion of a [process flow](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn). A [process definition](../process/process-definition) may have multiple start nodes (each linked with a start condition) and end nodes based on the flow outcomes.
* **Send Message and Receive Message tasks** - Used for communication with external systems, integrations, and plugins.
* **Message Events** - Capture interactions between different process participants by referencing messages.
* **Task** nodes - Added when a [business rule](../actions/business-rule-action/business-rule-action) needs to execute during a process flow.
* **User Task** nodes - Configure the appearance and behavior of the UI and send data to custom components.
* **Exclusive Gateways** - Mark decision points in the process flow, determining the branch to be followed.
* **Parallel Gateways** - Split the process flow into two or more [branches](../../flowx-designer/managing-a-process-flow/adding-more-flow-branches) occurring simultaneously.
* **Call Subprocess Tasks**:
* **Call Activity** - Call activity is a node that provides advanced options for starting **subprocesses**.
* **Start Embedded Subprocess** - The Start Embedded Subprocess node initiates subprocesses within a parent process, allowing for encapsulated functionality and enhanced process management.
For comprehensive insights into BPMN and its various node types, explore our course at FlowX Academy:
* What's BPMN (Business Process Model Notation) and how does it work?
* How is BPMN used in FlowX?
After gaining a comprehensive overview of each node, you can experiment with them to create a process. More details are available in the following section:
[Managing a process flow](../../flowx-designer/managing-a-process-flow/)
# Parallel gateway
When you have multiple operations that can be executed concurrently, the Parallel Gateway becomes a valuable tool. This type of node creates a parallel section within the process, particularly useful for tasks that can run independently without waiting for each other. It's essential to close each parallel section with another Parallel Gateway node.
## Configuring parallel paths
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/gateway_parallel.png#center)
This node requires no special configuration and can initiate two or more parallel paths. It's important to note that the closing Parallel node, which is required to conclude the parallel section, will wait for all branches to complete before advancing to the next node.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/parallel_gateways.png)
### Configuration example
Let's consider a scenario involving a Paid Time Off (PTO) request. We have two distinct flows: one for the HR department and another for the manager. Initially, two tokens are generated, one for each parallel path. A third token is created when both parallel paths converge at the closing parallel gateway.
In the HR flow, in our example, the request is automatically approved.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/hr_flow.gif)
Now, we await the second flow, which requires user input.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/manager_flow.gif)
After the tokens from the parallel paths have completed their execution, a third token initiates from the closing parallel gateway.
# Start/end nodes
Let's go through all the options for configuring start and end nodes for a process definition.
## Start node
The start node represents the beginning of a process and it is mandatory to add one when creating a process.
![Start node](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/start_node.png#center)
A process can have one or more start nodes. If you define multiple start nodes, each should have a start condition value configured, and the desired start condition should be used when starting a new process instance.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/start_node_example.png)
### Configuring a start node
Node configuration is done by accessing the **Node Config** tab. You have the following configuration options for a **start node**:
* [General Config](#general-config)
* [Start condition](#start-condition)
#### General Config
* **Node name** - the name of the node
* **Can go back** - switching this option to true will allow users to return to this step after completing it
When encountering a step with `canGoBack` switched to false, all steps found behind it will become unavailable.
* [**Swimlane**](../../platform-deep-dive/user-roles-management/swimlanes) - choose a swimlane (if there are multiple swimlanes on the process) to make sure only certain user roles have access to certain process nodes; if the process has a single swimlane, the value is **Default**
* [**Stage** ](../../platform-deep-dive/plugins/custom-plugins/task-management/using-stages)- assign a stage to the node
#### Start condition
The start condition should be set as a string value. This string value will need to be set on the payload for the start process request on the `startCondition` key.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/start_node_condition.png)
To test the start condition, we can send a start request via REST:
```
POST {{processUrl}}/api/process/{{processName}}/start
{
"startCondition": "PF"
}
```
#### Error handling on start condition
If a request is made to start a process with a start condition that does not match any start node, an error will be generated. Let's take the previous example and assume we send an incorrect value for the start condition:
```
POST {{processUrl}}/api/process/{{processName}}/start
{
"startCondition": "PJ"
}
```
A response with the error code `bad request` and the title `Start node for process definition not found` will be sent in this case:
```json
{
"entityName": "ai.flowx.process.definition.domain.NodeDefinition",
"defaultMessage": "Start node for process definition not found.",
"errorKey": "error.validation.process_instance.start_node_for_process_def_missing",
"type": "https://www.jhipster.tech/problem/problem-with-message",
"title": "Start node for process definition not found.",
"status": 400,
"message": "error.validation.process_instance.start_node_for_process_def_missing",
"params": "ai.flowx.process.definition.domain.NodeDefinition"
}
```
## End node
![End Event](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/end-event.png#center)
An end node is used to mark where the process finishes. When the process reaches this node, the process is considered completed and its status will be set to `Finished`.
### Configuring an end node
Multiple end nodes can be used to show different end states. The configuration is similar to the start node.
![End node](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/end_node.png)
# Task node
A task node represents a unit of work that uses services, such as web services or automated applications, to accomplish a particular task.
![Task node](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/service_task.png#center)
This type of node finds application in multiple scenarios, including:
* Executing a [**business rule**](../actions/business-rule-action/) on the process instance data
* Initiating a [**subprocess**](../actions/start-subprocess-action)
* Transferring data from a subprocess to the parent process
* Transmitting data to frontend applications
## Configuring task nodes
One or more actions can be configured on a task node. The actions are executed in the configured order.
Node configuration is done by accessing the **Node Config** tab. You have the following configuration options for a task node:
#### General Config
* **Node name** - the name of the node
* **Can go back** - switching this option to true will allow users to return to this step after completing it
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/task_node_general_config.png)
When encountering a step with `canGoBack` switched to false, all steps found behind it will become unavailable.
* [**Swimlane**](../../platform-deep-dive/user-roles-management/swimlanes) - choose a swimlane (if there are multiple swimlanes on the process) to make sure only certain user roles have access to certain process nodes; if there are no multiple swimlanes, the value is **Default**
* [**Stage** ](../../platform-deep-dive/plugins/custom-plugins/task-management/using-stages)- assign a stage to the node
#### Response Timeout
* **Response timeout** - can be triggered if, for example, a topic that you define and add in the [Data stream topics](#data-stream-topics) tab does not respect the pattern. The format used for this is the [ISO 8601 duration format](https://www.w3.org/TR/NOTE-datetime) (for example, a delay of 30s is set up as `PT30S`)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/task_node_response_timeout.png)
#### Data stream topics
* **Topic Name** - the topic name where the [process engine](../../platform-deep-dive/core-components/flowx-engine) listens for the response (this should be added to the platform and match the topic naming rule for the engine to listen to it) - available for UPDATES topics (Kafka receive events)
A naming pattern must be defined on the [process engine configuration](../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka) to use the defined topics. It is important to know that all the events that start with a configured pattern will be consumed by the Engine. For example, `KAFKA_TOPIC_PATTERN` is the topic name pattern where the Engine listens for incoming Kafka events.
* **Key Name** - will hold the result received from the external system; if the key already exists in the process values, it will be overwritten
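As an illustrative sketch (the key name here is hypothetical, not a platform default), a reply consumed from an UPDATES topic with **Key Name** set to `externalResult` would be merged into the process instance data like this:
```json
{
  "externalResult": {
    "status": "APPROVED",
    "score": 720
  }
}
```
If `externalResult` already exists among the process values, it is overwritten, as noted above.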
#### Task Management
* **Update task management** - force [Task Manager Plugin](../../platform-deep-dive/plugins/custom-plugins/task-management/task-management-overview) to update information about this process after this node
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/task_node_task_management.png)
## Configuring task nodes actions
Multiple options are available when configuring an action on a task node. To configure and add an action to a node, use the **Actions** tab at the node level, which has the following configuration options:
* [Action Edit](#action-edit)
* [Parameters](#parameters)
#### Action Edit
Depending on the type of the [**action**](../actions/actions), different properties are available. Let's take a [**business rule**](../actions/business-rule-action/business-rule-action) as an example.
1. **Name** - used internally to differentiate between different actions on nodes in the process. We recommend defining an action naming standard to be able to quickly find the process actions.
2. **Order** - if multiple actions are defined on the same node, their running order should be set using this option
3. **Timer Expression** - can be used if a delay is required on that action. The format used for this is the [ISO 8601 duration format](https://www.w3.org/TR/NOTE-datetime) (for example, a delay of 30s is set up as `PT30S`)
4. **Action type** - defines the appropriate action type
5. **Trigger type** - (options are Automatic/Manual) - choose if this action should be triggered automatically (when the process flow reaches this step) or manually (triggered by the user); in most use cases, this will be set to automatic.
6. **Required type** - (options are Mandatory/Optional) - automatic actions can only be defined as mandatory. Manual actions can be defined as mandatory or optional.
7. **Repeatable** - should be checked if the action can be triggered multiple times
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/task_node_action_edit.png)
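To summarize, here is a hypothetical JSON sketch of these properties (the key names and values are illustrative, not the engine's actual schema):
```json
{
  "name": "calculateRiskScore",
  "order": 1,
  "timerExpression": "PT30S",
  "actionType": "BUSINESS_RULE",
  "triggerType": "AUTOMATIC",
  "requiredType": "MANDATORY",
  "repeatable": false
}
```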
#### Parameters
Depending on the type of the [**action**](../actions/actions), different properties are available. Let's use a **Business rule** as an example.
1. **Business Rules** - business rules can be attached to a node by using actions with action rules on them. These can be specified using [DMN rules](../actions/business-rule-action/dmn-business-rule-action), [MVEL](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel) expressions, or scripts written in JavaScript, Python, or Groovy.
### Business Rule action
A [business rule](../actions/business-rule-action/business-rule-action) is a Task action that allows a script to run. For now, the following script languages are supported:
* [**MVEL**](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel)
* **JavaScript (Nashorn)**
* **Python (Jython)**
* **Groovy**
* [**DMN**](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-dmn) - more details about a DMN business rule configuration can be found [here](../actions/business-rule-action/dmn-business-rule-action)
For more details on how to configure a Business Rule action, check the following section:
Being an event-driven platform, FlowX.AI uses WebSocket communication to push events to the frontend application.
For more details on how to configure a Send data to user interface action, check the following section:
The Upload File action is used to upload a file from the frontend application and send it via a Kafka topic to the document management system.
For more details on how to configure an Upload File action, check the following section:
In order to create reusability between business processes, as well as to split complex processes into smaller, easier-to-maintain flows, the Start Subprocess action can be used to trigger the same sequence multiple times.
For more details on how to configure a Start Subprocess action, check the following section:
Used for copying data from the subprocess to its parent process.
For more details about the configuration, check the following section:
# Timer boundary event
A Timer Boundary Event is a type of event in Business Process Model and Notation (BPMN) that is associated with a specific task or subprocess within a process. It triggers when a predetermined time duration or a specific date is reached while the associated task or subprocess is in progress.
Timer Boundary Events are utilized to incorporate time-related conditions into processes, enabling actions to be taken at specified time intervals, deadlines, or specific dates. This capability is especially valuable for scenarios where time-sensitive actions or notifications need to be integrated seamlessly within process flows.
## Timer boundary event - interrupting
A Timer Boundary Event is an event attached to a specific activity (task or subprocess) that is triggered when a specified time duration or date is reached. It can interrupt the ongoing activity and initiate a transition.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/timer_boundary_event_interrupting.svg#center)
### Configuration
For Timer Boundary Events - Interrupting, the following values can be configured:
| Field | Validations | Accepted Values |
| ---------- | ----------- | -------------------------------- |
| Definition | Mandatory | ISO 8601 formats (date/duration) |
| | | Process param |
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/intermediate_timer_event.png)
### General rules
* When the token enters the parent activity, a scheduler is set, and it waits for the timer event to be triggered.
* When the timer is triggered, the ongoing activity is terminated, and the process continues with the defined transition.
## Timer boundary event - non-interrupting
A Timer Boundary Event is an event attached to a specific activity (task or subprocess) that is triggered when a specified time duration or date is reached. It can trigger independently of the ongoing activity and initiate a parallel path.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/timer_boundary_event_non_interrupting.svg#center)
### Configuration
For Timer Boundary Events - Non-Interrupting, the following values can be configured:
| Field | Validations | Accepted Values |
| ---------- | ----------- | -------------------------------- |
| Definition | Mandatory | ISO 8601 formats (date/duration) |
| | | Process param |
### General rules
* When a token arrives at a node with a Timer Boundary Event - Non-Interrupting associated:
* A trigger is scheduled, but the current token execution remains unaffected.
* When the token enters the parent activity, a scheduler is set, and it waits for the timer event to be triggered.
* If the timer is a cycle, it is rescheduled for the specified number of repetitions.
* The scheduler is canceled if the token leaves the activity before it is triggered.
# Timer events
Timer event nodes are a powerful feature in BPMN that allow you to introduce time-based behavior into your processes. These nodes enable you to trigger specific actions or events at predefined time intervals, durations, or cycles. With timer event nodes, you can design processes that respond to time-related conditions, ensuring smoother workflow execution and enhanced automation.
There are three primary types of timer event nodes:
* **Timer Start Event (interrupting/non-interrupting)**: This node initiates a process instance at a scheduled time, either interrupting or non-interrupting ongoing processes. It allows you to set a specific date, duration, or cycle for the process to start. You can configure it to trigger a process instance just once or repeatedly.
* **Timer Intermediate Event** (interrupting): This node introduces time-based triggers within a process. It's used to pause the process execution until a specified time duration or date is reached. Once triggered, the process continues its execution.
* **Timer Boundary Event (interrupting/non-interrupting)**: Attached to a task or subprocess, this node monitors the passage of time while the task is being executed. When the predefined time condition is met, the boundary event triggers an associated action, interrupting or non-interrupting the ongoing task or subprocess.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/timer_events.png)
## Timers
Timers introduce the ability to trigger events at specific time intervals. They can be configured in three different ways: as a date, a duration, or a cycle. These configurations can use static values or dynamic/computed values.
* **Date**: Events triggered on a specific date and time.
* Format: ISO 8601 (e.g., `2019-10-01T12:00:00Z` or `2019-10-02T08:09:40+02:00`)
* **Time Duration**: Events triggered after a specified duration.
* Format: ISO 8601 duration expression (`P(n)Y(n)M(n)DT(n)H(n)M(n)S`)
* P: Duration designator
* Y: Years
* M: Months
* D: Days
* T: Time designator
* H: Hours
* M: Minutes
* S: Seconds
Examples:
* `PT15S` - 15 seconds
* `PT1H30M` - 1 hour and 30 minutes
* `P14D` - 14 days
* `P3Y6M4DT12H30M5S` - 3 years, 6 months, 4 days, 12 hours, 30 minutes, and 5 seconds
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/timer_events_duration_date.gif)
* **Time Cycle** (available for Timer Start Event): Events triggered at repeating intervals.
* Option 1: ISO 8601 repeating intervals format (`R`)
* Examples:
* `R5/2023-08-29T15:30:00Z/PT2H`: Every 2 hours, up to five times, starting on 29 August at 15:30 UTC
* `R/P1D`: Every day, infinitely
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/timer_start_cycle.gif)
* Option 2: Using cron expressions
* Example: `0 0 9-17 * * MON-FRI`: Every hour on the hour from 9 a.m. to 5 p.m. UTC, Monday to Friday
Important: Only Spring cron expressions are permissible for configuration. Refer to the [**official documentation**](https://docs.spring.io/spring-framework/4.0/docs/current/javadoc-api/org/springframework/scheduling/support/CronExpression.html) for detailed information on configuring Spring Cron expressions.
Scheduled timer events are clearly indicated within the process definition list, as illustrated in the following example:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/process_with_scheduled_timer.png)
To manage timers efficiently, you have the option to activate or suspend them through the convenient quick actions menu located in the process page header:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/activate_suspend_timer.gif)
More information about timer expressions can be found in the section below:
[Timer expressions](./timer-expressions)
## Configuration
For each node type, the following timer types can be configured:
| Node Type | Date | Duration | Cycle |
| ------------------------ | ---- | -------- | ----- |
| Timer Start Event | Yes | No | Yes |
| Timer Intermediate Event | Yes | Yes | No |
| Timer Boundary Event | Yes | Yes | No |
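For illustration, reusing formats already shown above, a valid **Definition** value for each timer type might be:
```
Date:     2019-10-01T12:00:00Z
Duration: PT1H30M
Cycle:    R/P1D
```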
A process definition version should have a single Timer Start Event.
For comprehensive details on each timer event node in this section, please refer to the corresponding documentation:
# Timer expressions
When working with FlowX.AI components, there are multiple scenarios in which timer expressions are needed.
There are two timer expressions formats supported:
* [**Cron expressions**](#cron-expressions) - used to define the expiry date on processes
* [**ISO 8601**](#iso-8601) - used to define the duration of a response timeout or for a timer expression
### Cron expressions
A cron expression is a string made up of **six mandatory subexpressions (fields), each of which specifies an aspect of the schedule** (for example, `* * * * * *`). These fields, separated by white space, can contain any of the allowed values with various combinations of the allowed characters for that field.
A field may be an asterisk (`*`), which always stands for “first-last”. For the day-of-the-month or day-of-the-week fields, a question mark (`?`) may be used instead of an asterisk.
Important: Only Spring cron expressions are permissible for configuration. Refer to the [**official documentation**](https://docs.spring.io/spring-framework/reference/integration/scheduling.html#scheduling-cron-expression) for detailed information on configuring Spring Cron expressions.
Subexpressions:
1. Seconds
2. Minutes
3. Hours
4. Day-of-Month
5. Month
6. Day-of-Week
7. Year (optional in some cron dialects, such as Quartz; not supported by Spring cron expressions)
An example of a complete cron expression is the string `0 0 12 ? * FRI`, which means **every Friday at 12:00:00 PM**.
More details:
[Scheduling cron expressions](https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#scheduling-cron-expression)
#### Cron expressions are used in the following example:
* [**Process definition**](../../../building-blocks/process/process-definition) - **Expiry time** - a user can set up an `expiryTime` function on a process; for example, a delay of 30s will be set up as follows:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/timer_process_settings.png)
### ISO 8601
ISO 8601 is an international standard covering the worldwide exchange and communication of date and time-related data. It can be used to standardize the following: dates, time of delay, time intervals, recurring time intervals, etc.
More details:
[ISO 8601 date format](https://www.digi.com/resources/documentation/digidocs/90001488-13/reference/r_iso_8601_date_format.htm)
[ISO 8601 duration format](https://www.digi.com/resources/documentation/digidocs//90001488-13/reference/r_iso_8601_duration_format.htm)
#### ISO 8601 format is used in the following examples:
* **Node config** - **Response Timeout** - can be triggered if, for example, a topic that you define and add in the **Data stream topics** tab does not respect the pattern
ISO 8601 dates and times:
| Format accepted | Value ranges |
| -------------------- | -------------------------------------------- |
| Year (Y)             | YYYY, four-digit; may be abbreviated to two-digit |
| Month (M) | MM, 01 to 12 |
| Week (W) | WW, 01 to 53 |
| Day (D) | D, day of the week, 1 to 7 |
| Hour (h) | hh, 00 to 23, 24:00:00 as the end time |
| Minute (m) | mm, 00 to 59 |
| Second (s) | ss, 00 to 59 |
| Decimal fraction (f) | Fractions of seconds, any degree of accuracy |
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/timer_response_timeout.png)
* [**Actions**](../../actions/actions) - **Timer expression** - it can be used if a delay is required on that action
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/timer_action_edit.png)
# Timer Intermediate Event (interrupting)
A Timer Intermediate Event (interrupting) is an event that is triggered based on a specified time duration or date. It is placed within the flow of a process and serves as a point of interruption and continuation.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/timer_intermediate_event.png#center)
## Configuring a timer intermediate event (interrupting)
| Field | Validations | Accepted Values |
| ---------- | ----------- | -------------------------------- |
| Definition | Mandatory | ISO 8601 formats (date/duration) |
| | | Process param |
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/intermediate_timer_eventz.png)
### Timer type
#### Date
* event triggered on a specific date-time
* ISO 8601 format (example: `2019-10-01T12:00:00Z` - UTC time, `2019-10-02T08:09:40+02:00` - UTC plus a two-hour zone offset)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/intermediate_timer_date.png)
#### Duration
Event triggered after a specified duration has elapsed since the token reached the timer node (or parent node) (example: `PT6S`).
* Definition:
* ISO
* Cron
* Process param
## General rules
* A Timer Intermediate Event is triggered based on its duration or date definition.
* When the token enters a Timer Intermediate Event, a scheduler is set, and it waits for the timer event to be triggered.
* After the timer is triggered, the process instance continues.
# Timer start event (interrupting)
A Timer Start Event initiates a process instance based on a specified time or schedule.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/timer_start_interrupting.png#center)
Please note that a process definition version can accommodate only one Timer Start Event.
If a process definition contains versions with Start Timer Event nodes, only the published version will generate a scheduler.
## Configuration
Depending on the node type, the following timer types can be configured:
| Node Type | Date | Duration | Cycle |
| ----------------- | ---- | -------- | ----- |
| Timer Start Event | Yes | No | Yes |
Starting a process via registered timers requires sending a process start message to Kafka, necessitating a service account and authentication. For detailed guidance, refer to:
[**Service Accounts**](../../../../setup-guides/access-management/configuring-an-iam-solution#scheduler-service-account)
### Timer type values
* Date
* Cycle
#### Date
Specifies an exact date and time for triggering the event. You can use ISO 8601 date format for accurate date-time representation.
#### Scenario: employee onboarding reminder
In this scenario, the Timer Start Event is used to trigger an employee onboarding process at a specific date and time.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/employee_onboarding_reminder.png)
* Start Event (Timer Start Event) - New Hire Start Date
* Timer Definition: 2023-09-01T09:00:00Z (ISO 8601 format) → This means the process will initiate automatically at the specified date and time.
* This event serves as the trigger for the entire process.
* Transition → Employee Onboarding Notification
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/start_timer_date.png)
* Employee Onboarding Notification
* Notify new employee about onboarding requirements by sending an email notification with a template called "Important Onboarding Information"
* Actions: The HR team or automated system sends out necessary email information/documents, and instructions to the new employee.
* After the notification is sent, the process transitions to the Complete Onboarding node.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/onboarding_notification.png)
* Complete Onboarding
* Employee onboarding completed
* At this point, the employee's onboarding process is considered complete.
* Actions: The employee may have completed required tasks, paperwork, or orientation sessions.
#### Cycle
Specifies a repeating interval for triggering the event. The cycle can be defined using ISO 8601 repeating intervals or cron expressions.
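For example, reusing the formats shown earlier, a cycle definition could use either style:
```
R5/2023-08-29T15:30:00Z/PT2H    ISO 8601: every 2 hours, up to five times
0 0 9-17 * * MON-FRI            Spring cron: hourly, 9:00-17:00, Monday to Friday
```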
### Configuration according to timer type
For each timer type, the following values can be configured:
| Field | Validations | Accepted Values |
| ----------------- | ------------------ | ---------------------------------- |
| Definition | Mandatory | - Process param |
| | | - ISO 8601 formats (date/duration) |
| | | - Cron expressions (cycle) |
| Start Time | Only for Cycle | - ISO 8601 format (date-time) |
| | | - Process param |
| End Time | Only for Cycle | - ISO 8601 format (date-time) |
| | | - Process param |
| Active/Suspended  | Default: Suspended | - Active                           |
| | | - Suspended |
The Start Timer Event supports either ISO 8601 formats or Spring cron expressions for defining timer values.
### General rules
* A process definition can have a single published version, which can be either a committed or a WIP version.
* Only the published version generates a scheduler when it contains Start Timer Event nodes.
* When a new committed version is published or when a WIP published version is updated with new Start Timer Event settings:
* The scheduler is updated based on the settings in the published version.
* The scheduler state (active or suspended) remains the same as before.
# User task node
This node represents an interaction with the user. It is used to display a piece of UI (defined in the UI Designer or as a custom Angular component). You can also define actions available for the users to interact with the process.
## Configuring a user task node
![User Task Node](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/user_task_node.png#center)
User task nodes allow you to define and configure UI templates and possible [actions](../actions/actions) for a certain template config node (ex: [button components](../ui-designer/ui-component-types/buttons)).
#### General Config
* **Node name** - the name of the node
* **Can go back** - setting this to true will allow users to return to this step after completing it. When encountering a step with `canGoBack` false, all steps found behind it will become unavailable.
* [**Stage** ](../../platform-deep-dive/plugins/custom-plugins/task-management/using-stages)- assign a stage to the node
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/node_config_new.png)
#### Data stream topics
* **Topic Name** - the topic name where the process engine listens for the response (this should be added to the platform and match the topic naming rule for the engine to listen to it) - available for UPDATES topics (Kafka receive events)
A naming pattern must be defined on the [process engine configuration](../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka) to use the defined topics. It is important to know that all the events that start with a configured pattern will be consumed by the Engine. For example, `KAFKA_TOPIC_PATTERN` is the topic name pattern where the Engine listens for incoming Kafka events.
* **Key Name** - will hold the result received from the external system, if the key already exists in the process values, it will be overwritten
#### Task Management
* **Update task management** - force [Task Management](../../platform-deep-dive/plugins/custom-plugins/task-management/) plugin to update information about this process after this node
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/user_task_node_task_mngmnt.png)
## Configuring the UI
The **FlowX Designer** includes an intuitive [UI Designer](../ui-designer/ui-designer) (drag-and-drop editor) for creating diverse UI templates. You can use various elements, from basic [buttons](../ui-designer/ui-component-types/buttons), indicators, and [forms](../ui-designer/ui-component-types/form-elements/), to predefined [collections](../ui-designer/ui-component-types/collection/collection) or [prototypes](../ui-designer/ui-component-types/collection/collection_prototype).
### Accessing the UI Designer
To access the **UI Designer**, follow the next steps:
1. Open **FLOWX Designer** and from the **Processes** tab select **Definitions**.
2. Select a **process** from the process definitions list.
3. Click the **Edit** **process** button.
4. Select a **user task** **node** from the Process Designer, then click the **brush** icon to open the **UI Designer**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/access_ui_designer.gif)
### Predefined components
The UI can be defined using the available components provided by FlowX, via the UI Designer available at the node level.
Predefined components can be split into 3 categories:
These elements are used to group different types of components, each having a different purpose:
* [**Card**](../ui-designer/ui-component-types/root-components/card) - used to group and configure the layout for multiple **form elements.**
* [**Container**](../ui-designer/ui-component-types/root-components/container) - used to group and configure the layout for multiple **components** of any type.
* [**Custom**](../ui-designer/ui-component-types/root-components/custom) - these are Angular components developed in the container application and passed to the SDK at runtime, identified here by the component name
More details in the following section:
The root component can hold a hierarchical component structure.
Available children for **Card** and **Container** are:
* **Container** - used to group and align its children
* **Form** - used to group and align form field elements (**inputs**, **radios**, **checkboxes**, etc)
* **Image** - allows you to configure an image in the document
* **Text** - a simple text can be configured via this component, basic configuration is available
* **Hint** - multiple types of hints can be configured via this component
* **Link** - used to configure a hyperlink that opens in a new tab
* **Button** - Multiple options are available for configuration, the most important part being the possibility to add actions
* **File Upload** - A specific type of button that allows you to select a file
More details in the following section:
These elements are used to allow the user to input data and can be added only inside a **Form** component. They have multiple properties that can be managed.
1. [**Input**](../ui-designer/ui-component-types/form-elements/input-form-field) - FLOWX form element that allows you to generate an input form field
2. [**Select**](../ui-designer/ui-component-types/form-elements/select-form-field) - to add a dropdown
3. [**Checkbox**](../ui-designer/ui-component-types/form-elements/checkbox-form-field) - the user can select zero or more input from a set of options
4. [**Radio**](../ui-designer/ui-component-types/form-elements/radio-form-field) - the user is required to select one and only one input from a set of options
5. [**Datepicker**](../ui-designer/ui-component-types/form-elements/datepicker-form-field) - to select a date from a calendar picker
6. [**Switch**](../ui-designer/ui-component-types/form-elements/switch-form-field) - allows the user to toggle an option on or off
More details in the following section:
### Custom components
These are components developed in the web application and referenced here by component identifier. This will dictate where the component is displayed in the component hierarchy and what actions are available for the component.
To add a custom component in the template config tree, we need to know its unique identifier and the data it should receive from the process model.
More details in the following section:
## Displaying a UI Element in FlowX Designer
When a process instance is started, the web application receives all the UI elements that can be displayed in that process.
1. Starting a Process:
* The process is initiated by sending a request to the SDK.
* This tells the server to start the specific process definition, in this case, named "DemoProcess".
```json
...
{
"processDefinitionName" : "DemoProcess",
"tokens" : [ {
"id" : 1961367,
"startNodeId" : null,
"embedNodeId" : null,
"mainSwimlaneId" : null,
"currentProcessVersionId" : 861684,
"currentContext" : "main",
"initiatorType" : null,
"initiatorId" : null,
"currentNodeId" : 921777,
"currentNodeName" : null,
"state" : "ACTIVE",
"statusCurrentNode" : "ARRIVED",
"dateUpdated" : "2024-09-23T09:29:06.668355Z",
"uuid" : "60ecb3a1-0073-4d98-86b8-263bdaa95a8b"
} ],
"state" : "STARTED"
...
}
```
2. Displaying UI Elements:
* As the process progresses, it reaches different nodes, in our example, "User Tasks" (nodes that require human interaction).
* When a User Task is reached, a server message is sent to the application, instructing it to display the UI element associated with that user task. This UI element is essentially the part of the interface that the user will interact with at this point in the process.
* Inside the `templateConfig` you can find all the UI elements configured on the node. In the following example you can see the details of a `CARD` UI element.
```json
...
"templateConfig" : [ {
"id" : 781824,
"flowxUuid" : "771f7a69-2858-4ef4-8f58-a052c0cfe724",
"componentIdentifier" : "CARD",
"type" : "FLOWX",
"order" : 1,
"canGoBack" : true,
"displayOptions" : {
"flowxProps" : {
"title" : "Company representative",
"hasAccordion" : false
},
"style" : {
"widthType" : "fixed",
"layoutType" : "grid",
"flexLayout" : {
"fxLayout" : "row wrap",
"fxLayoutAlign" : "start start",
"fxLayoutGap" : 10
},
"gridLayout" : {
"columns" : 3,
"position" : "start start",
"columnGap" : 8,
"rowGap" : 8
},
"gridChild" : {
"colSpan" : 1,
"rowSpan" : 1,
"order" : 0
},
"heightType" : "auto",
"width" : 950
},
"className" : null,
"platform" : "web"
}
} ]
...
```
3. Matching UI Elements to User Tasks:
* When the application receives a `progressUpdateDTO` update, the SDK searches for the UI element whose `nodeId` matches the one in the SSE event.
Start process response:
```json
...
"navigationAreaId" : 86501151,
"nodeDefinitionId" : 921779,
"context" : "main"
...
```
SSE event:
```json
"{\"progressUpdateDTO\":{\"processInstanceUuid\":\"271f00b9-4344-4a82-8479-378be92a5377\",\"tokenUuid\":\"7fd79711-ec92-425e-8b5d-8bb6c6362095\",\"currentNodeId\":921779,\"currentContext\":\"main\"}}"
```
4. Fetching Data and Actions:
* To display the UI element correctly, the application may need additional data or perform certain actions.
* It retrieves this data via a request to the server. This step ensures that the displayed UI element has all the information and functionality it needs to work properly, like updating the displayed information or enabling specific actions for the user.
In simple terms, the process involves starting a sequence, notifying the application when to show specific user interface parts, and ensuring these parts have the necessary data to function correctly.
# Data model
The Data Model is a centralized configuration feature that enables efficient management of key-value attributes inside process definitions. It supports multiple attribute types, such as strings, numbers, booleans, objects, arrays, and enums, offering users the ability to define, update, delete, and apply data attributes seamlessly.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/data_model_new.png)
### Attribute types
The Data Model supports the following attribute types:
* STRING
* NUMBER
* CURRENCY
* BOOLEAN
* OBJECT
* ARRAY
* ARRAY OF STRINGS
* ARRAY OF NUMBERS
* ARRAY OF BOOLEANS
* ARRAY OF OBJECTS
* ARRAY OF ENUMS
* ENUM
* DATE
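As a purely hypothetical illustration (the key names are invented for this example), several of these attribute types might appear together in process instance data like this:
```json
{
  "firstName": "John",
  "age": 34,
  "isEligible": true,
  "address": {
    "city": "Bucharest",
    "country": "RO"
  },
  "phoneNumbers": ["+40700000000", "+40711111111"],
  "maritalStatus": "MARRIED",
  "contractDate": "2024-09-28T09:00:00Z",
  "loanAmount": {
    "amount": 12000.78,
    "code": "USD"
  }
}
```
Here `maritalStatus` would be an ENUM, `phoneNumbers` an ARRAY OF STRINGS, and `loanAmount` a CURRENCY object, whose structure is detailed below.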
#### Currency attribute
Currencies are managed using an object structure that ensures accurate representation and localization.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-18%20at%2010.36.00.png)
* **Currency Object Structure**:
* Includes `amount` (numerical value) and `code` (ISO 4217 currency code, e.g., USD, EUR).
* Example:
```json
{
"amount": 12000.78,
"code": "USD"
}
```
* **Regional Formatting**:
* Currency values adapt to regional conventions for grouping, decimals, and symbol placement. For instance:
* **en-US (United States)**: `$12,000.78` (symbol before the value, comma for grouping, dot for decimals).
* **ro-RO (Romania)**: `12.000,78 RON` (dot for grouping, comma for decimals, code appended).
* **Fallback Behavior**: If the `code` is null, the system defaults to the locale's predefined currency settings.
* **UI Integration**:
* Currency input fields dynamically format values based on locale settings and save the `amount` and `code` into the data store.
* Sliders and other components follow the same behavior, formatting values and labels according to the locale.
Check this section for more details about l10n & i18n
#### Date attribute
Dates are represented in ISO 8601 format and dynamically formatted based on locale and application settings.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-18%20at%2010.44.33.png)
* **Locale-Specific Date Formats**: FlowX dynamically applies regional date formatting rules based on the locale. For instance:
* **en-US (United States)**: `MM/DD/YYYY` → `09/28/2024`
* **fr-FR (France)**: `DD/MM/YYYY` → `28/09/2024`
* **Customizable Formats**: You can choose from predefined formats (e.g., short, medium, long, full) or define custom formats at both application and UI Designer levels.
* **Timezone Handling**:
* **Standard Date**: Adjusts to the current timezone.
* **Date Agnostic**: Ignores time zones, using GMT for consistent representation.
* **ISO 8601 Compliance**: Ensures compatibility with international standards.
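For instance, a date attribute (hypothetical key name) stores the ISO 8601 value, while renderers apply the locale-specific format at display time:
```json
{
  "contractDate": "2024-09-28T00:00:00Z"
}
```
With the locale set to en-US this would render as `09/28/2024`; with fr-FR, as `28/09/2024`.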
Check this section for more details about l10n & i18n
#### Number attribute
The **Number** attribute type supports two subtypes: **integers** and **floating point numbers**, providing flexibility to represent whole numbers or decimal values as required.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-18%20at%2010.41.25.png)
* **Subtypes**
* **Integer**: Represents whole numbers without any fractional or decimal part.
* Example: `1, 42, 1000`
* **Floating Point**: Represents numbers with a decimal point, enabling precise storage and representation of fractional values.
* Example: `3.14, 0.01, -123.456`
* **Locale-Specific Formatting**:
* Numbers adapt to regional conventions for decimal separators and digit grouping. For example:
* **en-US (United States)**: `1,234.56` (comma for grouping, dot for decimals)
* **de-DE (Germany)**: `1.234,56` (dot for grouping, comma for decimals)
* **fr-FR (France)**: `1 234,56` (space for grouping, comma for decimals)
* **Precision Settings**:
* **Minimum Decimals**: Ensures a minimum number of decimal places are displayed, adding trailing zeros if necessary (e.g., with a minimum of 2 decimals, `3.1` is displayed as `3.10`).
* **Maximum Decimals**: Limits the number of decimal places stored, rounding values to the defined precision (e.g., with a maximum of 2 decimals, `3.14159` is stored as `3.14`).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-18%20at%2010.42.16.png)
These settings can be overridden at the application level or in the **UI Designer** for specific components.
* **Validation**:
* Enforce range constraints (e.g., minimum and maximum values).
* Input fields automatically apply formatting masks to prevent invalid data entry.
Check this section for more details about l10n & i18n
### Creating a data model
In the Data Model, you can add new key-value pairs, allowing seamless integration with the UI Designer. This functionality enables quick shortcuts for adding new keys without switching between menus.
### Data model reference
The "View References" feature allows you to see where specific attributes are used within the data model. This includes:
* **Process Keys**: Lists all process keys linked to an attribute.
* **UI Elements**: Displays references such as the element label, node name, and UI Element key.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/2024-11-18%2010.18.08.gif)
### Sensitive data
Protect sensitive information by flagging specific keys in the data model. This ensures data is hidden from process details and browser console outputs.
### Reporting
The **Use in Reporting** tag allows you to designate keys for use in the reporting plugin, enabling efficient tracking and analysis.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/dm_reporting.png)
# Navigation areas
Navigation areas play a pivotal role in user interface design. They enhance the user experience by providing a structured, organized, and efficient way for users to interact with and explore various features and solutions.
In the navigation panel, the navigation hierarchy should be displayed beneath platform tabs, which include options for both web and mobile platforms. The default tab selected upon opening should be the web platform.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/nav_ares.gif)
## Adding new areas
To create a new navigation area, follow these steps:
1. Open **FlowX Designer**.
2. Either open an existing process definition or create a new one.
3. Toggle the navigation areas menu by clicking on **Navigation view**.
4. Select **Create New Area**.
Navigation configurations made on one platform (for example, Web) are not automatically duplicated to the other platform by default. You must copy or recreate them.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/creating_nav_areas.gif)
## Interacting with areas
Once you've added a new navigation area, you can interact with it through the breadcrumbs menu, which provides the following options:
Use **Area Settings** to configure the following properties:
* **Name** - a name for the navigation area element that is visible in the **Navigation menu**
This does not represent the label for your navigation area (for example, for a step element) that would be visible in the process at runtime. To set the label for your element, use the UI Designer to edit it.
To do that, trigger the **Navigation View** menu, then navigate to your element and click on the breadcrumbs. Afterward, click "UI Designer."
* **Alternative Flows** - There might be cases when you want to include or exclude process nodes based on some information that is available as a start condition.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ui_designer_navigation_areas.gif)
Use UI Designer to configure the following:
* **Settings**:
* **Generic**: Properties available across all platforms (Web, Android, and iOS)
* **Platform-specific configuration and styling**: For components across Web, iOS, and Android.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/copy_paste_areas.gif)
The "Copy Navigation Areas" feature allows you to replicate navigation hierarchies and elements between different platforms.
The Copy/Paste Navigation Areas feature facilitates the duplication of navigation configurations within the same process definition and environment. It does not support copying across different process definitions or environments.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/delete_area.gif)
To delete a navigation area:
1. Access the **Navigation view** menu within the FlowX Designer.
2. Choose the intended navigation area, then click on the breadcrumbs menu located on the right.
3. Select **Delete Area**.
Note: Deleting the navigation area will also remove any associated child areas. Additionally, any user tasks contained within will be unassigned and relocated to the User Tasks section within the **Navigation view**.
## Navigation area types
You can create the following types of navigation areas:
* **Stepper** - guides users through a sequence of logical and numbered steps, aiding in navigation.
* **Tab bar** - a user interface element presenting multiple tabs for navigation or organizing content. It allows users to switch between different sections or views within the application.
* **Page** - displays basic full-page content for an immersive user experience.
* **Modal** - temporarily takes control of the user's interaction within a process as an overlay, requiring user interaction before returning to the main application or website functionality.
* **Zone** - a container-like element grouping specific navigation areas or user tasks, commonly used for organizing content such as headers and footers within a process definition.
* **Parent process area** - allows subprocess design under the parent process hierarchy; ensures validation and design restrictions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/navigation_types.gif)
### Stepper
A stepper breaks down progress into logical and numbered steps, making navigation intuitive.
You can also define a stepper in step structure, with the possibility of tracking progress and returning to a previously visited step (as you can see in the navigation areas in the example above).
#### Stepper in step example
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/stepper_in_steps.gif)
#### Steps
Steps are individual elements within a stepper, simplifying complex processes.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/steps_ui_t.png)
### Tab bar
The Tab Bar is a crucial component in user interfaces, facilitating seamless navigation and content organization.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/tabs_view.gif)
Key Features and usage guidelines:
The navigation Tab Bar offers specialized support for **parallel zones** within the same **user task**. This feature allows users to navigate effortlessly between different sections or functionalities.
Users can access multiple tabs within the tab bar, enabling quick switching between various views or tasks.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/tab_bar_nav_area.png)
When setting up a tab bar zone in your BPMN diagram, ensure that you always initiate a parallel zone using **Parallel Gateway** nodes. This involves using one Parallel Gateway node to open the zone and another to close it, as demonstrated in the example above. This approach ensures clarity and consistency in your diagram structure, facilitating better understanding.
**Mobile development considerations**: when developing for mobile devices, ensure the tab bar is **always** positioned as a **root component** in the navigation. It should remain consistently visible, prioritizing ease of navigation.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/root_navigation_tab.png)
#### Tabs
Tabs are clickable navigation elements within the Tab Bar that allow users to switch between different sections or functionalities within an application.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/tabs_parallel.png)
Each tab could contain a user task or multiple user tasks that can be rendered in parallel.
You can also use [**Start embedded subprocess**](../node/call-subprocess-tasks/start-embedded-subprocess) nodes to render defined subprocesses as a tab. Check the [**Start embedded subprocess**](../node/call-subprocess-tasks/start-embedded-subprocess) for more details.
#### Tab bars inside tabs
In addition to regular tab bars, you have the option to implement nested tab bars, allowing for the display of another set of tabs when accessing a specific tab.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/nested_tab_bars.png)
To achieve this configuration, you'll need to create an additional parallel zone inside the previously created one for the main tab bar. This section should encompass the child tabs of the primary tab bar in your diagram. Once these child tabs are defined, close the parallel zone accordingly.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/sub_tabs.png)
### Page
Page navigation involves user interaction, typically through clicking on links, buttons, or other interactive elements, to transition from one page to another.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/page_smth.gif)
A page could contain multiple user tasks displayed either in parallel (single page application) or one by one (wizard style).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/page_overview.png)
#### Navigation type (only for Web)
This property is available starting with the **v4.1.0** platform release.
You have the possibility to add step-by-step or wizard-style navigation within a page (applicable when a page contains more than one User Task). This means users can navigate through the application in a guided manner, accessing each step within the designated area and page.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/page_navigation_type.png)
* **Single page form** (default): The Web Renderer will display all User Tasks within the same page (in parallel).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/page_single_page.png)
For optimal navigation, we suggest utilizing cards to guide users through the content.
To maintain a clean UI while displaying user tasks on a single page, use cards as the root UI elements and apply accordions to them. This allows you to collapse each card after it is validated by an action.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/cards_single_page_coll.gif)
Child areas will be rendered on the same page.
* **Wizard**: For the Wizard option, the Web Renderer will display one user task at a time, allowing navigation using custom Next and Back buttons.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/page_wizard.gif)
Child areas will be presented sequentially. It's crucial to configure actions properly to ensure smooth user navigation.
### Modal
Modals offer temporary control over user interaction within a process, typically appearing as pop-ups or overlays. They guide users through specific tasks before returning to the main functionality of the application or website.
![Modal Example](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/modal.gif)
To enable user interaction with a modal, you can configure it to be dismissable in two ways:
* Dismiss on backdrop click
* Display dismiss confirmation alert
Here's how to configure these options:
1. Navigate to your configured modal and access the UI Designer.
![Modal UI Designer](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/modal_ui_designer.gif)
2. In the UI Designer navigation panel, select the **Settings** tab, then choose **Generic**.
3. Check the box labeled "dismissable." If you also want to display a dismiss confirmation alert, configure the:
* Title
* Message
* Confirm button label
* Cancel button label
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/modal_configuration.png)
By configuring these options, you can customize the behavior of your modals to enhance user experience and guide them effectively through tasks.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/modal_example.gif)
### Zone
A Zone serves as a container designed to govern UI navigation by grouping specific elements together. Its optimal application is in scenarios involving processes featuring both a header and footer.
#### Navigation type (only for Web)
You have the possibility to add step-by-step or wizard-style navigation within a specific zone (applicable when a zone contains more than one User Task). This means users can navigate through the application in a guided manner, accessing each step within the designated area and page.
* **Single page form** (default): The Web Renderer will display all User Tasks within the same zone.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/single_page_form.png)
For optimal navigation, we suggest utilizing cards to guide users through the content.
Child areas will be rendered on the same page.
* **Wizard**: For the Wizard option, the Web Renderer will display one user task at a time, allowing navigation using custom Next and Back buttons.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/wizard.gif)
Child areas will be presented sequentially. It's crucial to configure actions properly to ensure smooth user navigation.
Zones with headers and footers are exclusively accessible in web configurations. They are not supported as navigation areas for mobile development.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/header_footer.png)
#### How to Create a Header/Footer Zone
To establish a header/footer zone, follow these steps:
1. Begin by opening a new parallel zone and make sure to close it where you want the header/footer zone to end.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/header%3Afooter_parallel.gif)
2. Introduce two new user task nodes within your process, one designated to function as the header and the other as the footer.
3. Connect the first parallel gateway node to both of them.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/connect_header_footer.png)
4. Now connect the header and footer user tasks to the closing Parallel Gateway.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/connect_with_closing.gif)
5. In the navigation areas menu, incorporate a new zone labeled "Header Zone" and a new zone labeled "Footer Zone".
6. Position the header at the top of your navigation structure (or at the top within your primary navigation element) and the footer at the bottom.
7. Assign the user tasks to the "Header Zone" and the "Footer Zone" respectively.
When working with containers directly within navigation zones, you have the flexibility to set the sticky/static property directly on your specific navigation zone using the UI Designer, without having to add specific user tasks. However, determining the best practice depends on the specific use case you're aiming to implement.
For instance, if you're looking to incorporate footers containing actions or buttons such as "Cancel application" or "Continue," it's advisable to include user tasks, allowing you to configure node actions effectively.
On the other hand, if your goal is to integrate static headers and footers featuring branding assets and external URLs, it's simpler to add them directly to the navigation areas. In this scenario, you can effortlessly incorporate images or text with external URLs in your containers' styling.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/header_footer_blaa.png)
8. Proceed to customize the user tasks UI according to your preferences.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/header_footer_custom.gif)
#### Styling options
To customize the appearance of headers and footers, you need to utilize containers as the root UI elements for **user tasks** or navigation areas.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/container_ui.png)
You have two styling options available for these containers:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/sticky_configuration.gif)
* **Static**: This style remains fixed and does not scroll along with the page content.
* **Sticky**: When the sticky property is enabled, the container maintains its position even during scrolling.
* **Layout**: You have the option to specify minimum distances between the container and its parent element while scrolling. At runtime, sticky containers keep their position on scroll relative to the top/bottom/right/left margins of the parent element.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/Screenshot%202023-12-13%20at%2017.18.39.png)
In mobile configurations, the right and left properties for sticky layout are ignored by the mobile renderer.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/sticky_demo.gif)
## Navigation areas demo
For more information, you can also check the following demo on our Academy website:
## Alternative flows
There might be cases when you want to include or exclude process nodes based on some information that is available at start.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/alternative_flows.png)
For example, in case of a bank onboarding process, you might want a few extra BPMN nodes in the process in case the person trying to onboard is a minor.
For these cases, we have added the possibility of defining **alternative flows**.
For each navigation area or node, you can choose to set whether they are only available in certain cases. In the example below, we will create two alternative flows, each including a different stepper, plus one modal that is not part of any alternative flow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/adding_alternative_flows.gif)
When starting a process, you must specify which flow the process will be initiated for, using the `navigationFlow` parameter with the name of the alternative flow:
```json
{"navigationFlow": "First Flow"}
```
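Assuming the same start endpoint used earlier for start conditions, a full start request targeting an alternative flow might look like this:
```
POST {{processUrl}}/api/process/{{processName}}/start
{
  "navigationFlow": "First Flow"
}
```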
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/alternative_flows_ex.gif)
If a node is not assigned to an alternative flow, this means the node will be included in all possible flows.
A node could also be a part of multiple flow names.
When configuring alternative flows in the main process, remember that they will also be applied in any embedded subprocesses with identical names.
# Process Designer
The Process Designer workspace is tailored for creating and editing business processes, featuring a menu with all the essential elements needed to design a process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/process_designer_411.png)
When you encounter a process definition menu, it encompasses the following essential elements:
Awesome Process: This serves as the name of the process definition, providing a clear identifier for the workflow.
This section displays both the version branch name and the current state of the published version. It offers insights into the active development state of the workflow.
When selected, this icon opens up additional options to enhance visibility and control over the various process definitions and their branches.
To commit alterations to the workflow, you can employ this designated action found within the version menu. Triggering the submission action prompts the appearance of a modal window, inviting you to provide a commit message for context.
This reassuring notification indicates that any modifications made to the workflow have been automatically saved, eliminating the need for manual user intervention. It ensures the safety of your work.
Utilize this icon to switch the current work mode from "Edit mode" to "Readonly." It empowers you to control the accessibility and editing permissions for the workflow.
The misconfiguration warnings are a proactive alert system that ensures process configurations align with the selected platforms. Dynamic platform-specific settings provide alerts that guide optimal navigation and UI design, integrated into the frontend interface to support informed decision-making and enhance process configuration.
In the Node details tab, you can set the configuration details for a node.
We have designed FlowX.AI components to closely resemble their BPMN counterparts for ease of use.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/smth_nodes.png)
Check the following section for more details about BPMN nodes and how to use them:
In the following sections, we will provide more details on how to use the Process Designer and its components.
## Process definition
The process is the core building block of the platform. Think of it as a representation of your business use case, for example making a request for a new credit card, placing an online food order, registering your new car or creating an online fundraiser supporting your cause.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/prs_def.png)
A process is nothing more than a series of steps that need to be followed in order to get to the desired outcome: getting a new credit card, gathering online donations from friends or having your lunch brought to you. These steps involve a series of actions, whether automated or handled by a human, that allows you to complete your chosen business objective.
## Process instance
Once the desired processes are defined in the platform, they are ready to be used. Each time a process needs to be used, for example, each time a customer wants to request a new credit card, a new instance of the specified process definition is started in the platform. Think of the process definition as a blueprint for a house, and of the process instance as each house of that type that is being built.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/prs_instance.png)
From this point on, the platform takes care of following the process flow, handling the necessary actions, storing and interpreting the resulting data.
# Process definition
The core of the platform is the process definition, which is the blueprint of the business process made up of nodes that are linked by sequences.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/process_definitions.png)
Once a process is defined and set as published on the platform, it can be executed, monitored, and optimized. When a business process is started, a new instance of the definition is created.
## History
In the **History** tab, you will find a record of all the modifications and events that have occurred in the process.
* **Versions** - provides information on who edited the process, when it was modified, and the version number and status
* **Audit log** - provides a detailed record of events and changes
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/process_audit.gif)
### Versions
In the **Versions** tab you will find the following details:
* Last edited on - the last time when the process was modified
* Last edited by - the last person who modified a process
* Version - version number
* Status - can be either **Published** or **Draft**
* View process - clicking on the eye icon will redirect you to the process definition
More details available in the following section:
### Audit log
In the **Audit log** tab you will find the following items:
* Timestamp
* User
* Subject
* Event
* Subject Identifier
* Version
* Status
Some items in the Audit log are filterable, making it easy to track changes in the process.
## Data model
In the Data Model, you can add new key-value pairs, which lets you use shortcuts when adding new keys in the UI Designer, without switching back and forth between menus.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/add_new_data_model.png)
For more information on how to work with the Data Model, explore the following section:
## Swimlanes
Swimlanes offer a useful method of organizing process nodes based on process participants. By utilizing swimlanes, you can establish controlled access to specific process nodes for particular user roles.
#### Adding new swimlanes
To add new swimlanes, please follow these steps:
1. Access the **FlowX.AI Designer**.
2. Open an existing process definition or create a new one.
3. Identify the default swimlane and select it to display the contextual menu.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/add_new_swimlane.png)
With the contextual menu, you can easily perform various actions related to swimlanes, such as adding or removing swimlanes or reordering them.
4. Choose the desired location for the new swimlane, either below or above the default swimlane.
5. Locate and click the **add swimlane icon** to create the new swimlane.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/swimlanes_docs.gif)
For more details about access management, check the following sections:
## Settings
### Process Name
* **Process definition name**: Edit the process definition name.
Name can only contain letters, numbers and the following special characters: \[] () . \_ -
### General
In the General settings, you can edit the process definition name, include the process in reporting, set general data, and configure expiry time expression using Cron Expressions and ISO 8601 formatting.
* **Available Platforms** (Single Choice): Enables configuration (Navigation Areas, UI Designer, Misconfigurations) only for the selected platform:
  * **Web Only**: If the navigation areas are defined exclusively for the Web, the process should remain set to Web. This is because it is optimized solely for web usage and would not provide the same level of functionality or user experience on mobile devices.
* **Mobile Only**: If the navigation areas are defined only for Mobile, the process should be set to Mobile. This ensures that the process leverages mobile-specific features and design considerations, providing an optimal experience on mobile devices.
* **Omnichannel**: If the existing process has navigation areas defined for both Web and Mobile platforms, it should be set to Omnichannel. This ensures that users have a consistent experience regardless of the platform they are using.
This setting is available starting with **v4.1.0** platform release.
Navigation Areas, UI Designer and misconfigurations will be affected by this setting.
By default, new processes are set to **Web Only**. This ensures that they are initially optimized for web-based usage, providing a starting point for most users and scenarios.
* **Use process in reporting**: When enabled, this setting includes the process in reporting.
* **Use process in task management**: Enabling this option creates tasks that are displayed in the Task Manager plugin. For more details, refer to the [**Task Management**](../../platform-deep-dive/plugins/custom-plugins/task-management/task-management-overview) section.
* **General data**: Refers to customizable data that can be both set and received in a response context.
* **Expiry time**: A user can set up an expiryTime expression on a process definition to specify an expiry date.
| **Example** | **Expression** | **Explanation** |
| ------------------------- | ---------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| Daily Expiry at Midnight | `0 0 0 * * ?` | Sets the process to expire at 00:00 (midnight) every day. The `?` is used in the day-of-week field to ignore its value. |
| Hourly Expiry on Weekdays | `0 0 9-17 * * MON-FRI` | Sets the process to expire at the start of each hour between 9 AM and 5 PM on weekdays (Monday to Friday). |
| Expiry After a Duration | `PT3M22S` | Sets the process to expire after a duration of 3 minutes and 22 seconds from the start, using **ISO 8601 format**. |
FlowX supports Spring cron expressions. They **must always include six fields**: `seconds`, `minutes`, `hours`, `day of month`, `month`, and `day of week`.\
This differs from traditional UNIX cron, which uses only five fields. Be sure to include the `seconds` field in Spring cron expressions to avoid errors.
In the daily-midnight example, the expression sets seconds (0), minutes (0), and hours (0), followed by wildcards for the day-of-month and month fields. The `?` in the day-of-week field means "no specific value" and is used because the schedule is already fully determined by the other fields.
You can use both ISO 8601 duration format (`PT3M22S`) and cron expressions (`0 0 0 * * ?`, `0 0 9-17 * * MON-FRI`) to define `expiryTime` expressions for process definitions.
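As a quick side-by-side sketch of the two supported formats (plain constants for illustration only; the expressions are the ones from the table above):
```javascript
// Spring cron field order: seconds minutes hours day-of-month month day-of-week
const dailyMidnight = "0 0 0 * * ?";          // 00:00:00 every day; "?" = no specific day-of-week
const weekdayHours  = "0 0 9-17 * * MON-FRI"; // top of each hour, 09:00-17:00, Monday to Friday
const afterDuration = "PT3M22S";              // ISO 8601 duration: 3 minutes 22 seconds after start
```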
For more information about **Cron Expressions** and **ISO 8601** formatting, check the following section:
[Timer Expressions](../node/timer-events/timer-expressions)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/proc_general_settings.png)
### Permissions
Once roles are defined in the identity provider solution, they become available in the process definition settings panel for configuring swimlane access.
When you create a new swimlane, it comes with two default permissions assigned based on a specific role: execute and self-assign. Other permissions can be added manually, depending on the needs of the user.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process_permissions.png)
### Task Management
The Task Management plugin offers a business-oriented view of the process you defined in the Designer and allows for interactions at the assignment level. It also includes a generic parameter pointing to the application URL where the FlowX.AI process is loaded and uses process keys to search data stored in the process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/process_task_mngmnt.png)
# Subprocess management
Learn how to configure, start, and execute subprocesses efficiently within your process flow.
Subprocesses are smaller process flows triggered by actions in the main process. They can inherit some process parameter values from the parent process and communicate with front-end apps using the same connection details as their parent process.
## Overview
Subprocesses can be started in two modes:
* **Asynchronous**: Execute alongside the parent process without delaying it.
* **Synchronous**: The parent process waits until subprocesses are finished before advancing.
## Configuring & starting subprocesses
Design subprocesses within the FlowX Designer, mirroring the main process structure. A subprocess can be started:
* From a **Start Subprocess** action configured within user tasks or task nodes.
* From a **Call Activity** node added to the main process.
### Parameter inheritance
Available for **Start Subprocess** action and **Call Activity** node.
By default, subprocesses inherit all parent process parameter values. Configure inheritance by:
* **Copy from Current State**: Select keys to copy.
* **Exclude from Current State**: Specify keys to ignore.
Subprocesses can also have an [**Append Params to Parent Process**](../actions/append-params-to-parent-process) action configured inside their process definitions, which appends their results to the parent process parameter values.
## Executing subprocesses
Define subprocess execution mode:
* **Asynchronous/Synchronous**: Set the `startedAsync` parameter accordingly.
In synchronous mode, subprocesses notify the parent process when they complete. The parent process then resumes its flow and handles the subprocess data.
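As a rough sketch only, the start-subprocess parameters described above could be pictured like this; apart from `startedAsync`, the key names are assumptions for illustration, not the exact platform schema:
```javascript
// Illustrative shape of start-subprocess action parameters.
const startSubprocessParams = {
  startedAsync: false,                    // false = synchronous: the parent waits for completion
  copyFromCurrentState: ["client"],       // assumed name: keys to copy from the parent state
  excludeFromCurrentState: ["internal"]   // assumed name: keys to ignore when copying
};
```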
## Additional resources
For detailed guidance on configuring and running subprocesses:
# Versioning
With versioning, you can easily track your process definition's evolution.
Here is a quick walkthrough video:
## Process Definitions list
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/versioning.png)
1. **Overview:**
* The tab provides a summary of all accessible process definitions in the current environment.
2. **Streamlined Data Display:**
* Information displayed in the list is retrieved from both the published version and work in progress.
3. **Key Information Included:**
* View details like process definition name, published version branch name, and published version state with the following convention:
* Work in progress - dotted
* Submitted - full
* Main branch - blue
* Secondary branch - yellow
4. **Actions:**
* Interact with each process definition through actions such as opening the BPMN tab in edit mode, starting instances, and displaying branching options.
* Contextual menu actions offer options to edit, open settings, view the data model, and delete process definitions.
## Branching and committing
"Branching Modal" feature provides more visibility and control over the process definitions. The process definition header includes details about the current version, such as its state (work in progress or submitted changes) and branch name.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/process_definition_header.png)
A "publish icon" will be displayed if the current version is set as published. You can access the branching modal using a button, and it can also be closed from the right corner close button.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/branching_modal.gif)
### Starting a new version (work-in-progress)
The Work-in-Progress (WIP) Versioning feature enhances the version control capabilities by allowing you to manage ongoing updates without interfering with already submitted versions.
You can initiate a new work-in-progress version while keeping the submitted version intact. A work-in-progress version is automatically created under the following circumstances:
* **New Process Definition**: When you create a new process definition, a corresponding work-in-progress version is initiated. This ensures that ongoing changes are tracked separately from the submitted version.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/wip_process_definition.png)
* **New Branch Creation**: The creation of a new branch in the system also triggers the creation of a work-in-progress version. This streamlines the process of branching and development, allowing for parallel progress without impacting the main submitted version.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/wip_new_branch.png)
* **Manual WIP Version Creation**: Users also have the flexibility to initiate a new work-in-progress version manually. This is particularly useful when building upon the latest version available on a branch.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/manual_wip.gif)
### Submitting changes
You can submit changes exclusively on work-in-progress (WIP) versions. Changes can be submitted using the designated action within the version menu. Upon triggering the submission action, a modal window appears, prompting you to provide a commit message.
The commit message is mandatory: a string of maximum 50 characters, containing only letters, numbers, and the characters \[] () . \_ - /.
The placeholder indicating work-in-progress is replaced with a "submitted" state within the graph view.
#### Updating submit messages
You have the flexibility to modify submit messages after changes are submitted. This can be accomplished using the action available in the version menu.
### Creating a new branch
Using versioning, you can work on a stable copy of the process definition, isolated from ongoing updates by other users. You can create a new branch starting from a specific submit point.
The initiation of new branches is achieved using the dedicated action located in the left menu of the chosen submit point (used as the starting point for the branch).
The branch name is mandatory: a string of maximum 16 characters.
### Merging changes
You can incorporate updates made on a secondary branch into the main branch or another secondary branch. To ensure successful merging of changes, adhere to the following criteria:
* You can merge the latest version from a secondary branch into either its direct or indirect parent branch.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/merge_child.gif)
* Versions from the Side Menu can be merged, streamlining the process.
* Upon triggering the merge action, a modal window appears, giving the possibility to make the following selection:
* **Branch**: Displays the branches of which the current branch is a child (direct or indirect). Branches that contain WIP versions are grayed out and cannot be merged.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/merge_not_possible.png)
* **Message**: A string of maximum 50 characters, limited to letters, numbers, and the following characters: \[] () . \_ - /.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/merge_changes.png)
The graph representation is updated to display the new version on the selected parent branch and the merged version is automatically selected, facilitating further development and tracking.
### Managing conflicts
The Conflict Resolution and Version Comparison feature provides a mechanism to identify and address conflicts between two process versions that, if merged, could potentially disrupt the integrity of the process definition.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/conflict.png)
The system displays both the version to be merged and the current version on a single screen, providing a clear visual representation of the differences. Conflicts and variations between the two versions are highlighted, enabling users to readily identify areas requiring attention.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/merge_conflict.gif)
Not all changes are considered conflicts; for example, changes in node positions are not treated as conflicts. Conflicts primarily arise from differences in business rules, expressions, and other scripts.
### Setting published version
You can specify which version will run by default from a container app.
When a process is created, the default published version settings are as follows:
* **Branch**: Main
* **Version**: Work in progress on the Main branch.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/main_wip.png)
You can change the branch and version used as the default by the container app.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/set_publish_info.gif)
This is done through the following settings:
* Branch: A dropdown menu displaying available branches (both opened and merged).
* Version: The type of version that should be used:
* **Latest Work in Progress**
* Displayed if the selected branch is not merged into another branch.
* This configuration is used when there is a work-in-progress (WIP) version on the selected branch or when there is no WIP version on the selected branch due to either work in progress being submitted or the branch being merged.
* In such cases, the latest available configuration on the selected branch is used as the default.
* **Latest Submitted Work**
* Displayed if the selected branch contains submitted versions.
* This configuration is used when there is submitted work on the selected branch, and the current branch has been submitted on another branch (latest submitted work on the selected branch is not the merged version).
* **Custom Version**
* Displayed if the selected branch contains submitted versions.
* Users can select from a dropdown menu containing submitted versions on the selected branch.
* Each option in the dropdown includes:
* Submit message
* Submit date and time
* Submit author
Options are ordered reverse chronologically by submit datetime.
### Read-only state
The Read-Only State feature allows you to access and view submitted versions of your process definitions while safeguarding the configuration from unintended modifications. By recognizing the visual indicators of the read-only state, you can confidently work within a controlled environment, ensuring the integrity of process definitions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/versioning_read_only.png)
### Audit view
The "Open Audit View" provides you with a detailed audit log of actions related to work-in-progress (WIP) versions of a process. The primary goal is to ensure transparency and accountability for actions taken before the commit or save process.
You can quickly access and review the history of WIP versions, facilitating efficient decision-making and collaboration.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/versioning_audit_log.gif)
For more details on how to use the audit log, see the following section:
[FlowX.AI Audit](/4.0/docs/platform-deep-dive/core-extensions/audit)
### Favorites
The My Favorites feature allows you to mark processes as favorites, a convenient way to identify and prioritize the processes that matter most to you and streamline your workflow. With this feature, a dedicated tab has been added to the process definition list.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/my_favorites.gif)
The "Open Audit View" provides you with a detailed audit log of actions related to work-in-progress (WIP) versions of a process. The primary goal is to ensure transparency and accountability for actions taken before the commit or save process.
In the **My Favorites** tab, the **Branch** tag will always display the most recently modified branch.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/my_favorites_tab.png)
#### Adding a process to favorites
To add a process to the "Favorites" tab, simply follow these steps:
1. Launch **FlowX.AI Designer**.
2. Either open an existing process definition or create a new one.
3. In the top-right corner, click on the breadcrumbs menu and select **Add to favorites**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/fav_add.png)
### Clone process version
The "**Copy version as new process**" feature enables users to duplicate an existing process version as a new process definition. This functionality proves beneficial when initiating work on a use case closely resembling an existing process or when undertaking significant modifications to an existing process definition.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/2024-03-20%2009.54.39.gif)
#### Process creation
After copying the version and creating a new process definition, the following will happen:
* A new process definition with a work-in-progress (WIP) version is created.
* The WIP version is populated with the copied process definition.
* The newly created process is then showcased in the Process Designer's "Definitions" tab, and the copied process definition becomes accessible.
### Exporting and importing process definitions versions
For detailed instructions on exporting and importing process definition versions, please refer to the following section:
# Supported scripts
Scripts are used to define and run actions, and also to compute properties inside nodes. The following scripting languages are currently supported.
## Business rules scripting
| Scripting Language | Language Version | Scripting Engine | Scripting Engine Version |
| ------------------ | ---------------- | ----------------------------------- | ------------------------ |
| Python | 2.7.0 | Jython | 2.7.3 |
| DMN | 1.3 | camunda.engine.dmn | 7.20.0 |
| MVEL | 2 | org.mvel.mvel2 | 2.5.2.Final |
| Groovy | 3.0.21 | org.codehaus.groovy » groovy-jsr223 | 3.0.21 |
| JavaScript | ECMAScript 5.1 | Nashorn | 15.4 |
## Integration designer scripting
| Scripting Language | Language Version | Scripting Engine | Scripting Engine Version |
| ------------------ | ---------------- | ---------------- | ------------------------ |
| Python | 3.10 | GraalPy | 3.10.8 |
| JavaScript | ES 2019+ | GraalJS | GraalVM 23.0.1+ |
***
## GraalVM CE Native 23.0.1
**GraalVM CE Native 23.0.1** is the community edition of GraalVM with support for multiple languages and AOT (ahead-of-time) compilation to native images. This allows Java and polyglot applications to start quickly and use fewer resources, making it ideal for serverless, CLI, and microservice applications.
### Supported Languages and Versions
* **Java**: JDK 17 base, providing the complete Java language specification and libraries.
* **JavaScript**: GraalJS runtime, with a minimum supported version of **ECMAScript 5** and full compatibility up to **ECMAScript 2022**.
* **Python**: GraalPy, compatible with **Python 3.10** syntax and libraries, enabling seamless integration with Java.
### Properties
* Offers fast startup and low memory usage through Native Image compilation.
* Provides polyglot capabilities, enabling interoperability between languages.
* Available on Linux, macOS, and Windows for a range of deployment environments.
### Useful Links
***
## GraalPy
GraalPy is a Python 3.10-compatible runtime built on **GraalVM**. It allows developers to embed Python into Java applications, providing high-performance execution and interoperability with Java. **GraalPy** supports Python’s standard library and is optimized for environments where Java and Python integration is essential, such as in data processing and AI applications.
### Properties
* Supports **Python 3.10** with most core libraries.
* Enables seamless interoperability with Java classes and functions.
* Offers performance benefits via GraalVM's Just-In-Time (JIT) compiler for optimized execution.
### Useful links:
## GraalJS
GraalJS is the JavaScript runtime within GraalVM that supports **ECMAScript 2022**, bringing modern JavaScript features to the JVM. GraalJS offers full Java interoperability, enabling JavaScript code to interact with Java classes directly. It is compatible with **GraalVM 23.0.1** and later, and it supports both JavaScript and Node.js applications, making it ideal for polyglot projects.
### Properties
* Implements **ECMAScript 2022** for modern JavaScript syntax and features.
* Provides full interoperability with Java, allowing seamless integration of JavaScript and Java code.
* Runs Node.js applications with support for the npm ecosystem.
### Useful links:
***
This documentation provides an overview of GraalVM CE Native 23.0.1’s language support and specific runtime details for each supported language.
## Jython
**Jython** is an implementation of the high-level, dynamic, object-oriented language [Python](http://www.python.org/) seamlessly integrated with the [Java](http://www.javasoft.com/) platform. Jython is an open-source solution.
### Properties
* Supports **Python 2.7**; most common Python libraries can be imported (e.g., `math`, `time`).
* Java libraries can also be imported: [details here](https://www.tutorialspoint.com/jython/jython_importing_java_libraries.htm)
### Useful links:
## DMN
Decision Model and Notation (DMN) is a standard for Business Decision Management.
FLOWX uses [BPMN.io](https://bpmn.io/) (based on **camunda-engine-dmn** version **7.20.0**) which is built on [DMN 1.3](https://www.omg.org/spec/DMN/1.3/PDF) standards.
### Properties
**camunda-engine-dmn** supports [DMN 1.3](https://www.omg.org/spec/DMN/1.3/PDF), including Decision Tables, Decision Literal Expressions, Decision Requirements Graphs, and the Friendly Enough Expression Language (FEEL).
### Useful links:
**More information:**
## MVEL
MVEL is a powerful expression language for Java-based applications. It provides a plethora of features and is suited for everything from the smallest property binding and extraction, to full-blown scripts.
* FLOWX uses [**mvel2**](https://mvnrepository.com/artifact/org.mvel/mvel2) version **2.5.2.Final** (see the table above)
### Useful links
### More information
## Groovy
Groovy is a multi-faceted language for the Java platform. It can be used to combine Java modules, extend existing Java applications, and write new applications.
We use and recommend **Groovy 3.0.21** version, using **groovy-jsr223** engine.
**Groovy** has multiple ways of integrating with Java, some of which provide richer options than available with **JSR-223** (e.g. greater configurability and more security control). **JSR-223** is recommended when you need to keep the choice of language used flexible and you don't require integration mechanisms not supported by **JSR-223**.
**JSR-223** (spec) is **a standard scripting API for Java Virtual Machine (JVM) languages** . The JVM languages provide varying levels of support for the JSR-223 API and interoperability with the Java runtime.
### Useful links
## Nashorn Engine (JavaScript)
Nashorn engine is an open source implementation of the [ECMAScript Edition 5.1 Language Specification](https://es5.github.io/). It also implements many new features introduced in ECMAScript 6 including template strings; `let`, `const`, and block scope; iterators and `for..of` loops; `Map`, `Set`, `WeakMap`, and `WeakSet` data types; symbols; and binary and octal literals. It is written in Java and runs on the Java Virtual Machine.
Latest version of **Nashorn** is **15.4**, available from [Maven Central](https://search.maven.org/artifact/org.openjdk.nashorn/nashorn-core/15.4/jar). You can check the [changelog](https://github.com/openjdk/nashorn/blob/main/CHANGELOG.md) to see what's new.
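A minimal sketch of the ES6 features listed above, assuming Nashorn runs with its ES6 extensions enabled (`print` is Nashorn's built-in output function):
```javascript
// Template strings, const, Map, and for..of — all part of the ES6 subset above.
const rates = new Map();
rates.set("conventional", 0.15);
rates.set("va", 0.35);
for (const loanType of rates.keys()) {
  print(`${loanType} -> ${rates.get(loanType) * 100}%`);
}
```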
### Useful links
# Token
The token describes the current position in the process flow. A process definition is a graph of nodes; at runtime, based on the configuration, the flow advances from one node to another along the defined sequences (the connections between nodes).
The token is a [BPMN](../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn) concept that represents a state within a process instance. It keeps track of the current position in the process flow and is used to store data related to the current process instance state.
A token is created each time a new process instance is started. As the actions on the process instance are executed, the token advances from one node to the next. As a node can have several [actions](./actions/actions) that need to be executed, the token is also used for keeping track of the actions executed in each node.
In case of [parallel gateways](./node/parallel-gateway), child tokens are created for each flow branch. The parent token moves to the gateway sync node and only advances after all the child tokens also reach that node.
The image below shows how a token advances through a process flow:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/image%20%28140%29%20%281%29%20%281%29%20%281%29%20%281%29%20%281%29.png)
The token will only move to the next node when there are no more mandatory actions from the current node that need to be executed. The token will also wait on a node in case the node is set to receive an event from an external system through Kafka.
In some cases, the token must stop in a node until input is received from the user. When that input is required for the process to advance, a mandatory manual action linked to the user action can be used; this ensures the process flow advances only after the user input is received.
## Checking the token status
The current process instance status can be retrieved using the FlowX Designer. It will display some info on the tokens related to that process instance and the current nodes they are in.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/check_token_status.png)
In case more details are needed about the token, you can click the **Process status** view button, choose a token, then click the **view button** again:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/token_view_button.gif)
## Token details
* **Token status**: Describes the state of the token in the process.
* **Status Current Node**: Describes the token status in the current node.
* **Retry**: After correcting the errors, you can hit Retry and see if the token moves on.
* **See Token status**: Opens a modal displaying a detailed view of the token status.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/token_status_current_node.png)
If parallel gateways are configured in a process, there will be multiple tokens, one created for each parallel path.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/multiple_tokens_copy.png)
### Token status
| Token Status | Description |
| ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| ACTIVE | the token state is set to active when tokens are created; a parent token is reactivated when all child tokens reach the parallel gateway closing node. |
| INACTIVE | child tokens are set to inactive when they arrive in a parallel gateway closing node; the current token is set to inactive when it reaches a final node. |
| ABORTED | the current token is set to Aborted when it moves backward in order to redo a series of previous actions in the process - it is reset, and a new token is activated. |
| ON HOLD | when a parallel split gateway node is reached, the parent token is set to On hold until all the child tokens reach the parallel gateway closing node; the parent token does not have a "Retry" action icon until all the child tokens are finished. |
| DISMISSED | when the process/subprocess reaches a certain node and it is canceled/exited. |
| EXPIRED | when a defined "expiryTime" in the process definition passes, the token will change to this status. |
| TERMINATED | when the process is terminated by a termination request. |
### Status current node
| Status Current Node | Definition |
| ----------------------------- | ----------------------------------------------------------------------------------------------------- |
| ARRIVED | when the token reaches the new node |
| EXECUTING | when the token execution starts |
| EXECUTED\_COMPLETE | after executing node actions, if all the mandatory actions from the node are completed |
| EXECUTED\_PARTIAL | after executing node actions, if there are still mandatory uncompleted actions on it |
| WAITING\_MESSAGE\_EVENT | when the token reaches an intermediate message catch event node, the token will be set to this status |
| WAITING\_TIMER\_EVENT | when the token reaches an intermediate timer event node, the token will be set to this status |
| WAITING\_MESSAGE | when the token waits for a message from another system |
| MESSAGE\_RECEIVED | after the message was received |
| MESSAGE\_RESPONSE\_TIMED\_OUT | if the message was not received in the set timeframe |
### See token status
You can access a detailed view of the token status by going to your Process instance -> Tokens -> View (eye icon):
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/token_status.png)
Here you will find details like the following (an illustrative payload follows the list):
* **id**: The unique identifier of the token.
* **version**: The version of the token.
* **parentTokenId**: The identifier of the parent token, if any.
* **startNodeId**: The identifier of the node where the token started.
* **embedNodeId**: The identifier of the embedded node, if any.
* **mainSwimlaneId**: The identifier of the main swimlane associated with the token.
* **currentProcessVersionId**: The identifier of the current process version.
* **currentContext**: The current context of the token.
* **initiatorType**: The type of the initiator, if any.
* **initiatorId**: The identifier of the initiator, if any.
* **currentNodeId**: The identifier of the current node associated with the token.
* **currentNodeName**: The name of the current node.
* **state**: The state of the token (for example, INACTIVE, ACTIVE, etc.)
* **statusCurrentNode**: The status of the current node.
* **syncNodeTokensCount**: The count of synchronized node tokens.
* **syncNodeTokensFinished**: The count of finished synchronized node tokens.
* **dateUpdated**: The date and time when the token was last updated.
* **paramValues**: Parameter values associated with the token.
* **processInstanceId**: The identifier of the process instance.
* **currentNode**: Details of the current node.
* **nodesActionStates**: An array containing information about action states of nodes associated with the token.
* **uuid**: The universally unique identifier (UUID) of the token.
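For illustration only, a token payload assembled from the fields above might look like the sketch below; every value is invented:
```javascript
// Illustrative token payload; field names follow the list above, values are made up.
const token = {
  id: 401,
  version: 2,
  parentTokenId: null,                    // no parent: this is not a child token
  startNodeId: "node_start",
  embedNodeId: null,
  mainSwimlaneId: 12,
  currentProcessVersionId: 7,
  currentNodeId: "node_user_task_1",
  currentNodeName: "Collect client data",
  state: "ACTIVE",
  statusCurrentNode: "EXECUTED_PARTIAL",  // mandatory actions still pending on this node
  syncNodeTokensCount: 0,
  syncNodeTokensFinished: 0,
  dateUpdated: "2025-02-02T10:15:30Z",
  paramValues: { navigationFlow: "First Flow" },
  processInstanceId: 1087,
  uuid: "3f2a9c1e-7b44-4e2a-9c0d-2f1e8a6b5d90"
};
```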
# Dynamic & computed values
In modern application development, the ability to create dynamic and interactive user interfaces is essential for delivering personalized and responsive experiences to users. Dynamic values and computed values are powerful features that enable developers to achieve this level of flexibility and interactivity.
## Dynamic values
Dynamic values exemplify the capacity to dynamically populate various properties of UI elements in response to process parameters or substitution tags. This dynamic customization occurs at runtime, allowing the application to adapt to specific scenarios or user inputs. With dynamic values, the UI can be tailored in detail, including labels, placeholders, error messages, and other properties.
This capability is now extended to an array of UI elements and their corresponding properties, utilizing process parameters or [**substitution tags**](../../platform-deep-dive/core-extensions/content-management/substitution-tags):
| Element | Property | Accepts Params/Subst Tags |
| --------------------------------------------------------- | ----------------------------- | ------------------------- |
| [**Form Elements**](./ui-component-types/form-elements) | Default Value (except switch) | Yes |
| | Label, Placeholder | Yes |
| | Helper Text, Validators | Yes |
| [**Document Preview**](./ui-component-types/file-preview) | Title, Subtitle | Yes |
| [**Card**](./ui-component-types/root-components/card) | Title, Subtitle | Yes |
| Form | Title | Yes |
| Message | Message | Yes |
| [**Buttons**](./ui-component-types/buttons) | Label | Yes |
| Select, Checkbox, Radio, Segmented Button (Static) | Label, Value | Subst Tags Only |
| Text | Text | Yes |
| Link | Link Text | Yes |
### Example using Substitution tags
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release-notes/dynamic_val.gif)
### Example using process parameters
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/dynamic_values_params.gif)
#### Business rule example
In the preceding example, an MVEL business rule demonstrates the population of specific keys with dynamic values from the task. This JSON object, assigned to the "app" key, captures the values for various UI properties:
```java
// assigning a map of dynamic values for the specified keys to the "app" key
output.put("app", {
    "label": "This is a label",
    "title": "This is a title",
    "placeholder": "This is a placeholder",
    "helpertext": "This is a helper text",
    "errorM": "This is an error message",
    "prefix": "prx",
    "suffix": "sfx",
    "subtitle": "This is a subtitle",
    "message": "This is a message",
    "defaultV": "defaultValue",
    "value": "Value101",
    "confirmLabel": "This is a confirm label",
    "cancelLabel": "This is a cancel label",
    "defaultValue": "dfs",
    "defaultDate": "02.02.2025",
    "defaultSlider": 90
});
```
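With this rule in place, UI element properties can reference the values through substitution tags; for example, setting a component's **Label** property to `${app.label}` would render "This is a label" at runtime (key names here follow the example above).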
Note that for releases **\< 3.3.0**, concatenating process parameters with substitution tags isn't supported when utilizing dynamic values.
## Computed values
Computed values present a method to dynamically generate values using JavaScript expressions. Rather than being limited to predefined values, computed values enable the manipulation, calculation, and transformation of data based on specific rules or conditions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/computed1.png)
Computed values can be created via JavaScript expressions that operate on process parameters or other variables within the application.
To introduce a computed value, you simply toggle the "Computed value" option (represented by the **f(x)** icon). This will transform the chosen field into a JavaScript editor.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/computed_default_value.png)
By enabling computed values, the application provides flexibility and the ability to create dynamic and responsive user interfaces.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release-notes/computed.gif)
### Slider example (parsing keys as integers)
The example above showcases the use of computed values in a Slider element. JavaScript expressions dynamically compute the minimum and maximum values based on a value sourced from a linked input UI element (connected via the process key `${application.client.amount}`).
#### Minimum Value
```js
if ( !isNaN(parseInt(${application.client.amount})) ) {
return 0.15 * parseInt(${application.client.amount})
} else {
return 10000
}
```
* `!isNaN(parseInt(${application.client.amount}))`: This part ascertains whether the value in the input field `(${application.client.amount})` can be effectively converted to an integer using `parseInt`. Moreover, it validates that the outcome isn't `NaN` (i.e., not a valid number), ensuring input validity.
* If the input is a valid number, the minimum value for the slider is calculated as 15% of the entered value `(0.15 * parseInt(${application.client.amount}))`.
* If the input is not a valid number `(NaN)`, the minimum value for the slider is set to 10000.
#### Maximum Value
```js
if ( !isNaN(parseInt(${application.client.amount})) ) {
return 0.35 * parseInt(${application.client.amount})
} else {
return 20000
}
```
* Similar to the previous expression, it checks if the value entered on the input field is a valid number using `!isNaN(parseInt(${application.client.amount}))`.
* If the input is a valid number, the maximum value for the slider is calculated as 35% of the entered value `(0.35 * parseInt(${application.client.amount}))`.
* If the input is not a valid number `(NaN)`, the maximum value for the slider is set to 20000.
#### Summary
In summary, the above expressions provide a dynamic range for the slider based on the value entered on the input field. If a valid numeric value is entered, the slider's range will be dynamically adjusted between 15% and 35% of that value. If the input is not a valid number, a default range of 10000 to 20000 is set for the slider. This can be useful for scenarios where you want the slider's range to be proportional to a user-provided value.
### Text example (using computed strings)
The following scenario outlines the functionality and implementation of dynamically displayed property types via a text UI element. This is done based on the chosen loan type through a select UI element in a user interface.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/dynamic_string.gif)
#### Scenario
The UI in focus showcases two primary UI elements:
* Select Element - "Loan type": This element allows users to choose from different loan types, including "Conventional," "FHA," "VA," and "USDA."
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/loan_type.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/select_values.png)
* Text Element - "Property type": This element displays property types based on the selected loan type.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/property_type.png)
The following code snippet illustrates how the dynamic property types are generated based on the selected loan type (JavaScript is used):
```javascript
if ("${application.loanType}" == "conventional") {
return "Single-Family Home, Townhouse CondoMulti-Family, Dwelling";
} else if ("${application.loanType}" == "fha") {
return "Single-Family Home, Townhouse, Condo, Manufactured Home";
} else if ("${application.loanType}" == "va") {
return "Single-Family Home, Townhouse, Condo, Multi-Family Dwelling";
} else if ("${application.loanType}" == "usda") {
return "Single-Family Home, Rural Property, Farm Property";
} else {
return "Please select a loan type first";
}
```
#### Summary
* **Loan Type Selection**: Users interact with the "Loan Type Select Element" to choose a loan type, such as "Conventional," "FHA," "VA," or "USDA."
* **Property Types Display**: Once a loan type is selected, the associated property types are dynamically generated and displayed in the "Text Element."
* **Fallback Message**: If no loan type is selected or an invalid loan type is chosen, a fallback message "Please select a loan type first" is displayed.
### Integration across the UI elements
The UI Designer allows the inclusion of JavaScript expressions for generating computed values. This functionality extends to the following UI elements and their associated properties:
| Element | Properties |
| -------------------------------------- | ----------------------------------- |
| Slider | min Value, max Value, default Value |
| Input | Default Value |
| Any UI Element that accepts validators | min, max, minLength, maxLength |
| Text | Text |
| Link | Link Text |
* **Slider**: The min value, max value, and default value for sliders can be set using JavaScript expressions applied to process parameters. This allows for dynamic configuration based on numeric values.
* **Any UI Element that accepts validators min, max, minLength, maxLength**: The "params" field for these elements can also accept JavaScript expressions applied to process parameters. This enables flexibility in setting validator parameters dynamically.
* **Default Value**: For input elements like text inputs or number inputs, the default value can be a variable from the process or a computed value determined by JavaScript expressions.
* **Text**: The content of a text element can be set using JavaScript expressions, allowing for dynamic text generation or displaying process-related information.
* **Link**: The link text can also accept JavaScript expressions, enabling dynamic generation of the link text based on process parameters or other conditions.
When working with computed values, it's important to note that they are designed to be displayed as integers and strings.
For input elements (e.g., text input), you may require a default value from a process variable, while a number input may need a computed value.
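For example, here is a hedged sketch of a computed validator value, following the same `${...}` substitution pattern as the slider example above; the `application.client.type` key is assumed purely for illustration:
```javascript
// Computed "maxLength" validator: allow longer values for company clients.
if ("${application.client.type}" == "company") {
  return 64;
} else {
  return 32;
}
```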
# Layout configuration
Layout settings are available for all components that can group other types of elements (for example, Containers or Cards).
The layout configuration settings enable users to customize key properties, including layout direction, alignment, gap, sizing, and spacing, to create visually appealing and functional designs.
These settings can be applied practically in various ways, depending on the context and purpose of the design:
* **Layout Direction and Alignment**: Use these settings to control how content is displayed, ensuring a logical and visually appealing arrangement. For example, a left-to-right layout direction suits languages that read from left to right, while center alignment is ideal for headings or titles.
* **Gap, Sizing, and Spacing**: Manage the distance between elements to create a sense of hierarchy and balance. Adjusting spacing between paragraphs or sections can improve readability, while resizing elements can prioritize certain components within the design.
* **Accessibility Considerations**: Customizing layout direction, alignment, spacing, and sizing can enhance the accessibility and inclusivity of your design. For example, adjusting these settings can make content more readable for users with disabilities or those using assistive technologies.
## Linear layout
The Linear layout arranges child elements in a single line, either horizontally or vertically.
### Linear layout configuration properties
**Linear**: Selected by default for arranging child elements in a linear fashion.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/set_linear.png)
**Direction**: Sets the main axis along which child elements are arranged:
* **Horizontal**: Aligns child elements horizontally in a row.
* **Vertical**: Aligns child elements vertically in a column.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/linear_direction.gif)
**Justify**: Controls the alignment of child elements along the main axis (the direction set by Horizontal or Vertical).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/linear_justify.gif)
Options include:
* **Start**: Aligns elements at the start of the container.
* **Center**: Centers elements along the main axis.
* **End**: Aligns elements at the end of the container.
* **Space Between**: Distributes elements evenly with space between them.
* **Space Around**: Distributes elements with space around them.
* **Space Evenly**: Distributes elements with equal space around them.
**Align**: Controls the alignment of child elements along the cross axis (perpendicular to the main axis).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/linear_align.gif)
Options include:
* **Start**: Aligns elements at the start of the cross axis.
* **Center**: Centers elements along the cross axis.
* **End**: Aligns elements at the end of the cross axis.
* **Stretch**: Stretches elements to fill the container along the cross axis.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/linear_wrap.gif)
**Wrap**: Determines whether child elements can flow onto multiple lines:
* **Enabled** (Yes): Allows elements to wrap onto multiple lines when they exceed the container's width along the main axis, ensuring a flexible and responsive layout.
* **Disabled** (No): Forces all elements to remain on a single line, even if they overflow the container's width, potentially causing elements to be clipped or hidden.
**Gap**: Sets the spacing between child elements, measured in pixels.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/linear_gap.gif)
To better understand how these layout configurations work and see real-time feedback on different combinations, please refer to the following link:
## Grid layout
In addition to linear (flex-based) layouts, you can configure components using a grid layout. This option is particularly useful for designs requiring a more structured and multi-dimensional arrangement of elements.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/grid_layout.png)
### Switching layout types
For components that contain child elements (such as Containers, Cards, and Forms), you can easily switch between a linear layout and a Grid layout using a layout picker. The default layout is linear, but you can select Grid to arrange your content in a more structured way.
### Platform-specific layouts
You can configure different layout settings for different platforms. For example, you can opt for a grid layout on the web but switch to a linear layout on mobile for a more streamlined user experience. This flexibility ensures that the design is responsive and optimized for each platform, improving usability across different devices.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/layout_platform_specific.gif)
#### Configuring the grid
Once the Grid layout is selected, you can define the number of columns. Initially, the columns will be distributed equally, meaning all columns will have the same width. Additional customization options include:
* **Number of Columns**: Set the number of columns for the grid. By default, the grid uses two columns.
* **Alignment**: Control how child elements are aligned within the grid. Alignment can be adjusted both horizontally and vertically, ensuring that content is positioned exactly where needed.
* **Gap Configuration**: Customize the gap between columns and rows. The default gap is inherited from the parent component's theme settings, ensuring consistency across your design.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/grid_specs.png)
#### Grid child settings
When configuring a grid, individual child components can be further customized:
* **Column Span**: Set how many columns a child component should span across.
Col Span should not exceed the number of parent grid columns.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/grid_layout_child_settings.gif)
* **Row Span (Web Only)**: Set how many rows a child component should span.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/grid_layout_child_settings1.gif)
#### Order property
The Order property controls the arrangement of elements within grid layouts. By default, elements follow the order in which they appear in the parent component's array. However, this order can be customized using the following options:
* **Auto**: The default setting that retains the order of elements as defined in the array (default value is `0`).
* **First**: Moves the element to the first position by setting its order to `-999`.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/order_first.gif)
* **Last**: Moves the element to the last position by setting its order to `999`.
* **Manual**: Allows for custom ordering by setting a specific numerical value.
Example: In a Grid with 10 elements, setting one element's order to `5` while the others are set to `auto` (which equals `0`) will place that element last, because `5` is greater than `0`.
#### Alignment in grid layouts
Proper alignment within grid layouts is essential, particularly when working with fixed-width columns or rows. By default, child elements will inherit alignment settings from their parent component, ensuring a consistent look and feel.
However, you have the flexibility to override these default settings for individual child elements, allowing precise control over horizontal and vertical alignment. This customization is key to achieving the desired visual structure, especially when certain elements need specific positioning within the grid.
### Component details
* **Components with gridLayout Property**: The following components support the `gridLayout` property (a configuration sketch follows this list):
* [**CONTAINER**](./ui-component-types/root-components/container)
* [**CARD**](./ui-component-types/root-components/card)
* **FORM**
* [**COLLECTION**](./ui-component-types/collection)
* [**COLLECTION PROTOTYPE**](./ui-component-types/collection/collection_prototype)
* **Components with gridChild Property**: All components can have the `gridChild` property, depending on their parent component's layout.
* **Collection gridLayout**: Within collections, the `gridLayout` property specifies only the number of columns, rowGap, and columnGap.
* **Collection Prototype**: The collection prototype does not include the `gridChild` property.
* **Grid and Flex Layout Application**: The grid and flex layouts apply exclusively to the layout of child components. Elements such as Card titles, subtitles, and Form titles are excluded from these layout types.
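Putting these pieces together, the sketch below pictures how `gridLayout` and `gridChild` settings relate; the surrounding object shape is an assumption for illustration, not the exact platform schema:
```javascript
// Illustrative only: grid settings for a Card and its children.
const card = {
  gridLayout: {
    columns: 2,               // number of grid columns (default: 2)
    columnGap: 8,             // px; defaults inherit from the component theme
    rowGap: 8,
    position: "start start"   // horizontal and vertical alignment
  },
  children: [
    { type: "INPUT",  gridChild: { columnSpan: 2, order: 0 } }, // spans the full row
    { type: "SELECT", gridChild: { columnSpan: 1, order: 0 } }
  ]
};
```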
### Default style properties
* **Grid Layout Defaults**:
* **Columns**: 2
* **ColumnGap**: Inherited from component theming gap
* **RowGap**: Inherited from component theming gap
* **Position**: Start Start (aligning items to the start both horizontally and vertically)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/grid_layout_defaults.png)
* **Grid Child Defaults**:
* **Position**: Start Start (aligned to the start of both axes)
* **ColumnSpan:** 1 (spans a single column)
* **RowSpan (Web)**: 1 (spans a single row)
* **Order**: 0 (Auto, maintaining the default order of elements)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/grid_child_defaults.png)
***
### Example using grid layout
**Use case**: **Customer Information Form**
This scenario involves a banking application that requires detailed input from a company representative, either during onboarding or for a loan application. Given the need to gather extensive information, a form with a 2-column grid layout is used to ensure clean and consistent alignment of fields.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/example_gridd.png)
For forms requiring additional inputs, consider adjusting the **Columns** property to match the number of fields needed. Expanding the form layout will help organize the inputs efficiently and improve user experience.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/more_columns.gif)
* Collect the company representative’s role and name.
* A 2-column grid layout can display the *Role* in one column and *Name* in the other, ensuring fields are aligned and easy to access.
* A toggle or checkbox asks if the representative is the main shareholder and/or the ultimate beneficial owner.
* A 2-column grid places these toggles side by side for a cleaner interface.
* Fields like *First Name*, *Personal Identification Number*, *Date of Birth*, and *Country of Residence* are collected.
* The 2-column grid aligns these fields horizontally, with appropriate spacing for readability.
* Collects *Phone Number*, *Email*, and *Preferred Method of Contact*.
* The grid places *Phone Number* and *Email* side by side, while the *Preferred Method of Contact* dropdown spans the row.
* Collect *State*, *Country*, and *City*.
* A 3-column grid displays these fields in a single row, simplifying navigation for the user.
* Capture employment history such as *Total Service Duration* and *Verified By*.
* A 2-column grid aligns these fields for easy comparison, avoiding misalignment or unnecessary gaps.
* A simple yes/no radio button for union membership.
* A grid layout ensures the options are neatly aligned for a clean, straightforward selection.
***
### Best Practices
To ensure an optimal user experience and consistent visual design, consider these best practices when using grid layouts:
* **Minimize Fixed Widths**: Avoid using fixed widths for child components in a grid. Relying on flexible sizing allows the grid to adapt smoothly across different screen sizes and devices. Only use fixed widths when absolutely necessary (e.g., for icons or buttons with defined sizes).
* **Consider Total Width**: If you know the width of the screen is `X`, ensure that the total width of your fixed-width elements doesn't exceed `X`. This prevents layout issues like content overflow or misalignment across different devices.
* **Use Column Span Thoughtfully**: When setting the `columnSpan` for a grid child, ensure that the total spans across all elements do not exceed the number of columns. This ensures a consistent and predictable layout.
* **Avoid Overcrowding**: When adding numerous child components to a grid, be mindful of spacing (gaps) between elements. Proper use of `rowGap` and `columnGap` helps maintain clarity and reduces visual clutter.
* **Leverage Default Inheritance**: Utilize the default alignment inheritance from the parent grid to ensure consistency. Only override alignment on child components when specific visual differences are needed.
* **Use `Auto` Order When Possible**: Stick with the `auto` order setting for most child components, unless a specific element needs to appear first or last. This will help maintain logical reading and visual flow without complex manual ordering.
* **Always Be Mindful of Mobile Screen Width**: When designing for mobile, always consider the narrower screen width. Ensure that grid layouts adapt gracefully, perhaps switching from a grid to a linear layout, or adjusting spacing and sizing to fit within the mobile screen's constraints without causing overflow or misalignment.
* **Test Across Platforms**: Given potential differences in behavior between web, iOS, and Android platforms, test your grid layout across these platforms to ensure consistent performance and avoid unexpected behavior like overflow or misalignment.
* **Avoid Fixed Widths on Columns**: Instead of setting fixed widths on the entire column, apply fixed widths to the individual elements inside the grid cells. This ensures more flexible layouts that adapt to different screen sizes, especially on mobile devices, without causing issues like overflow or misalignment.
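To make the column-span rule concrete, here is a minimal, hypothetical sketch of a 2-column grid configuration (the property names are illustrative only, not the platform's exact configuration schema):

```json
{
  "grid": { "columns": 2, "columnGap": 16, "rowGap": 16 },
  "children": [
    { "key": "firstName", "columnSpan": 1 },
    { "key": "lastName", "columnSpan": 1 },
    { "key": "address", "columnSpan": 2 }
  ]
}
```

Each row's spans add up to the column count (1 + 1 on the first row, 2 on the second), so the layout stays predictable on every screen width.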
### FAQs
**Q: If I set an element's order to `5` and leave the other elements on `auto`, why does it appear last?**
**A:** The order property works relative to other elements. If the other elements are set to `auto` (which equals `0`), the element with order `5` is placed after all the `0` elements, making it appear last.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/FAQ1.gif)
**Q: Can I set fixed widths on elements inside a grid?**
**A:** Be careful with fixed widths in grid layouts. They can lead to unexpected behavior, especially on different devices. It's better to use flexible sizing options whenever possible to maintain a responsive and predictable design.
# Localization and internationalization
FlowX localization and internationalization adapt applications to different languages, regions, and formats, enhancing the user experience with dynamic date, number and currency formatting.
## Internationalization
Internationalization (i18n) in FlowX enables applications to be easily adapted for multiple languages and regions without altering the core code. It sets the foundation for localization by handling language structure, layout adjustments, and supporting various formats.
To set the default language at the application level, navigate to **Projects -> Application -> Settings**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application_language.png)
## Localization
Locale settings impact all date, number, and currency formats based on the combination of region and language. The language dictates translations, while the region specifies formatting order and conventions.
### Locale sources
Locale settings are derived from two main sources:
* **Container Application**: Provides global locale settings across the application.
If not specified during deployment, the default locale will be set to `en-US`.
* **Application Level**: Enables context-specific overrides within the application for formatting Numbers, Dates, and Currencies.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/app_lvl_format.png)
## Core i18n & l10n features
### Date formats
The default date format in FlowX is automatically determined by the default locale set at the application or system level. Each locale follows its region-specific date convention.
For example:
* en-US locale (United States): `MM/DD/YYYY` → 09/28/2024
You can set date formats at the application level (as you can see in the example above), choosing from five predefined options: short, medium, long, full, or custom (e.g., dd/mm/yy).
Additionally, date formats can be overridden at the UI Designer level for specific UI elements that support date customization.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/override_custom.png)
UI Elements supporting date formats:
* Text
* Link
* Message
* Datepicker
FlowX will apply the following formatting options, adapting to the region's standard conventions:
| Format Type | Format Pattern | Example | Description |
| ----------- | --------------------- | ------------------------------ | ------------------------------------------ |
| **Short**   | `MM/dd/yy`            | `09/28/24`                     | Month before day, two-digit year.          |
| **Medium** | `MMM dd, yyyy` | `Sep 28, 2024` | Abbreviated month, day, four-digit year. |
| **Long** | `MMMM dd, yyyy` | `September 28, 2024` | Full month name, day, and four-digit year. |
| **Full**    | `EEEE, MMMM dd, yyyy` | `Saturday, September 28, 2024` | Full day of the week, month, day, year.    |
| **Custom**  | `dd/MM/yyyy`          | `28/09/2024`                   | User-defined format; day before month.     |
The date formats shown in the example are based on the **en-US (United States English) locale**, which typically uses the month-day-year order.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/date_formats.png)
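These predefined formats mirror the conventions browsers expose through the standard `Intl.DateTimeFormat` API. The snippet below only illustrates those locale conventions and is not FlowX's internal implementation; note that `Intl`'s short style drops leading zeros:

```javascript
const date = new Date(2024, 8, 28); // September 28, 2024 (months are zero-based)

new Intl.DateTimeFormat("en-US", { dateStyle: "short" }).format(date);  // "9/28/24"
new Intl.DateTimeFormat("en-US", { dateStyle: "medium" }).format(date); // "Sep 28, 2024"
new Intl.DateTimeFormat("en-US", { dateStyle: "long" }).format(date);   // "September 28, 2024"
new Intl.DateTimeFormat("en-US", { dateStyle: "full" }).format(date);   // "Saturday, September 28, 2024"
```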
If the predefined date formats do not meet your needs, you can declare and use a custom date format (either at the application level or as an override in the UI Designer).
The following example demonstrates how to set and display the various date formats using Text UI elements. We override the default format (coming from the application level) directly in the UI Designer to showcase all available formats:
FlowX formats dates using [**ISO 8601**](https://www.iso.org/iso-8601-date-and-time-format.html) for consistent and clear international representation.
### Number formatting
FlowX adjusts number formats to align with regional standards, including the use of appropriate decimal separators and digit grouping symbols. This ensures that numbers are displayed in a familiar format for users based on their locale.
#### Locale-specific formatting
Formatting the numbers to adapt decimal separators (`comma` vs. `dot`) and digit grouping (`1,000` vs. `1.000`) to match regional conventions.
The correct formatting for a number depends on the locale. Here's a quick look at how the same number might be formatted differently in various regions:
* **en-US (United States)**: 1,234.56 (comma for digit grouping, dot for decimals)
* **de-DE (Germany)**: 1.234,56 (dot for digit grouping, comma for decimals)
* **fr-FR (France)**: 1 234,56 (space for digit grouping, comma for decimals)
* **es-ES (Spain)**: 1.234,56 (dot for digit grouping, comma for decimals)
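You can reproduce these regional conventions with the standard `Intl.NumberFormat` API; the snippet is purely illustrative of the locale rules, not of FlowX's internals:

```javascript
const n = 1234567.89;

new Intl.NumberFormat("en-US").format(n); // "1,234,567.89"
new Intl.NumberFormat("de-DE").format(n); // "1.234.567,89"
new Intl.NumberFormat("fr-FR").format(n); // "1 234 567,89" (narrow no-break spaces)
new Intl.NumberFormat("es-ES").format(n); // "1.234.567,89"
```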
#### Decimal precision
You can set the minimum and maximum number of decimal places in the application settings, so numbers are stored and displayed with the required precision in the data store.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/app_number_formatting.png)
Formatting settings defined in the FlowX props in the UI Designer take precedence over the application-level formatting settings.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/min_max_decimals.png)
For UI elements that support `number` or `currency` types, FlowX checks the data model for precision settings and applies them during data storage, ensuring consistency with the configured precision levels.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/darta_model_precision.png)
This means that when using a `float` data model attribute, the precision settings directly control how the number is saved in the data store, specifying the maximum number of decimal places that can be stored.
Additionally, for input UI elements, a mask is applied to prevent users from entering more decimal places than the configured precision, ensuring data integrity and compliance with the defined formatting rules.
For more details, refer to [Data model integration](#data-model-integration).
* **Minimum Decimals**: Sets the least number of decimal places that a number will display. If a number has fewer than the specified decimals, trailing zeros will be added to meet the requirement.
* **Maximum Decimals**: Limits the number of decimal places a number can have. If the number exceeds this limit, it will be rounded to the specified number of decimals.
**Example**:
If you set both Minimum and Maximum Decimals to 3, a number like 2 would display as 2.000, and 3.14159 would be rounded to 3.142.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/min_max_decimals_ex.gif)
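The same rules can be sketched with `Intl.NumberFormat`'s fraction-digit options (for illustration only; FlowX applies the equivalent settings configured at the application level):

```javascript
const fmt = new Intl.NumberFormat("en-US", {
  minimumFractionDigits: 3, // trailing zeros are added up to this count
  maximumFractionDigits: 3, // longer fractions are rounded to this count
});

fmt.format(2);       // "2.000"
fmt.format(3.14159); // "3.142"
```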
At runtime, the system applies number formatting rules based on the locale and the settings defined in the application's configuration or UI Designer overrides.
If the number is linked to a data model key, the formatting adheres to the metadata defined there, ensuring consistent rendering of numerical data.
### Currency formatting
FlowX provides currency formatting features that dynamically adapt to regional settings, ensuring accurate representation of financial data.
#### Currency object structure
Currencies are managed as a system object with `amount` and `code` keys, creating a wrapper that facilitates consistent handling. This design ensures that every currency display corresponds accurately to the regional and formatting standards defined by the locale. If the `code` is not provided, the system uses the locale to determine the appropriate currency symbol or code.
#### Display behavior
When displaying currency values, the system expects keys like `loanAmount` to have both `amount` and `code` properties. For example, with the locale set to `en-US`, the output will automatically follow the US formatting conventions, displaying amounts like "\$12,000.78" when the currency is USD.
* If the value found at the key path is not an object containing `amount` or `code`, the system will display the value as-is if it is primitive. If it's an object, it will show an empty string.
* If the key path is not defined, similar behavior applies: primitive values are displayed as-is, and objects result in an empty string.
#### Locale-sensitive formatting
Currency formatting depends primarily on the region defined by the locale, not the language.
When the currency `code` is `null`, the system defaults to the currency settings embedded within the locale, ensuring region-specific accuracy.
#### Dynamic formatting in UI
FlowX dynamically applies number and currency formatting within UI components such as inputs and sliders. These components adjust in real time based on the current locale, so users always see values in their regional format. For example:
* Input fields with `CURRENCY` types save values in `{key path}.amount` and will delete the entry from the data store if the input is empty.
* Sliders will save integer values (with no decimals) to `{key path}.amount` and format the displayed values, including min/max labels, according to the locale and currency.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/slider_min_max.gif)
**Formatting Text with locale and data from UI elements**
This example demonstrates how to format a dynamic text by substituting placeholders with values from a data store, taking into account locale-specific formatting. In this case, the goal is to insert the value of a key called `loanAmount` (bound to an Input UI element) into a text and format it according to the `en-US` locale.
**Data store**:
A JSON object containing key-value pairs. The `loanAmount` key holds a currency object with two properties:
* `amount`: The actual loan value (a float or number).
* `code`: The currency code in ISO format (e.g., `"USD"` for US Dollars).
```json
{
"loanAmount": {
"amount": 12000.78,
"code": "USD"
}
}
```
**Locale**
The locale determines the number and currency formatting rules, such as the use of commas, periods, and currency symbols.
Locale: en-US (English - United States)
**Processing**
* Step 1: The platform extracts the `loanAmount.amount` value (`12000.78`) and the `loanAmount.code` from the data store.
* Step 2: The amount is formatted according to the specified locale (`en-US`). In this locale:
  * A comma (`,`) separates thousands.
  * A period (`.`) denotes decimal places.
* Step 3: The `${loanAmount}` placeholder in the text is replaced with the formatted value \$12,000.78.
**Output**
The resulting text with the formatted loan amount and the appropriate currency symbol for the en-US locale.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/loanAmount.gif)
Here is the data extracted from the data store (it is available on the process instance).
For instance, with a Romanian locale (`ro-RO`), currency is displayed like "12.000,78 RON" when the `code` is null or unavailable.
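A minimal JavaScript sketch of the processing steps above, using the standard `Intl` API purely for illustration:

```javascript
const dataStore = { loanAmount: { amount: 12000.78, code: "USD" } };
const locale = "en-US";

// Steps 1-2: extract amount and code, then format for the locale
const { amount, code } = dataStore.loanAmount;
const formatted = new Intl.NumberFormat(locale, {
  style: "currency",
  currency: code,
}).format(amount); // "$12,000.78"

// Step 3: substitute the formatted value into the text
const text = `Your loan amount is ${formatted}.`;

// With a Romanian locale and RON, the same data renders as "12.000,78 RON"
new Intl.NumberFormat("ro-RO", { style: "currency", currency: "RON" }).format(amount);
```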
## Data model integration
FlowX integrates number formatting directly with the data model. When keys are set to number or currency types, the system automatically checks and applies precision settings during data storage. This approach ensures consistency across different data sources and UI components.
Integration Details:
* Numbers are broken down into integer and decimal parts and stored according to the specified precision.
* Currency keys are managed as wrapper objects containing `amount` and `code`, formatted according to the locale settings.
## UI Designer integration
Formatting is dependent on the locale, which includes both language and region settings. The application container provides the locale for display formatting.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/locale_language_ui_designer.png)
Formatting settings (set to `auto` by default) can be switched to `manual` and overridden for Text, Message, and Link components within the UI Designer's properties tab.
The UI Designer enables application-level formatting settings for localized display of dynamic values.
## Supported components
* **Text/Message/Link**: Override general localization settings to match localized content presentation.
* **Input Fields**: Inputs correctly format currency and number types based on regional settings.
* **Sliders**: Currency formatting for sliders, with suffixes disabled when currency types are detected.
* **Datepickers**: Date formatting follows the locale's conventions.
# UI actions
Multiple UI elements can be linked to a specific action via a UI action. While the action itself is just a way to interact with the process, the UI action adds information about how the UI should react: for example, whether a loader should appear after executing the action, whether a modal should be dismissed, or whether some default data should be sent back to the process.
UI actions create a link between an [**action**](../actions/actions) and a UI component or [**custom component**](./ui-component-types/root-components/custom).
The UI action informs the UI element to execute the given action when triggered. Other options are available for configuration when setting an action to a button and are detailed below.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_actions.gif)
There are two main types of UI Actions:
* [Process UI Actions](#process-ui-actions)
* [External UI Actions](#external-ui-actions)
## Process UI actions
This is a UI action that describes how a [**Button**](../ui-designer/ui-component-types/buttons) (generated or custom) should interact with a process Manual action.
First, we need to configure the [manual action](../actions/actions) that will be referenced by the UI action. For more information on how to add an action to a node and how to configure it, check the following section:
### Manual action configuration example - Save Data
1. Add an **action** to a node.
2. Select the action type - for example **Save Data**.
3. The action **type** should be **manual**.
4. **Keys** - this has two important implications:
   * First, it is a prefix for the keys that will be sent back by the UI action linked to this action. For example, if we have a big form with many elements but need an action that sends back only the email (perhaps for an email validation feature), we add just that field's key: `application.client.email`. If we need a button that sends back all the form elements whose keys start with `application.client`, we can add just that prefix.
   * Second, a backend validation will run to accept and persist only the data that starts with this prefix. If we have three explicit keys, `application.client.email`, `application.client.phone`, and `application.client.address`, and we send `application.client.age`, that key will not be persisted.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_action_key.png)
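For example, with **Keys** set to `application.client`, the UI action linked to this node action might send back a payload like the hypothetical one below; anything outside the accepted prefix (such as the `application.client.age` key from the example above, when only the three explicit keys are declared) is dropped by the backend validation:

```json
{
  "application": {
    "client": {
      "email": "jane.doe@example.com",
      "phone": "+40 700 000 000"
    }
  }
}
```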
When this prerequisite is ready, we can define the UI action.
UI Actions and [Actions](../actions/actions) must be configured on the same node.
### UI action configuration example
Multiple configurations are available - **ACTION** type example:
* [**Event**](#events)
* [**Type**](#action-types)
* **Node Action Name** - a dropdown with the available actions for this node. If the dropdown is empty, add the required manual action before creating the UI action
* **Use a different name for UI action**
* **UI action name** - this becomes important when the action is used in a [**Custom component**](./ui-component-types/root-components/custom), as it will be used to trigger the action. For UI actions added on a generated button component, this name is purely descriptive
* **Custom body** - the default request body, in JSON format, that will be merged with any extra parameters added explicitly when the action is executed by a web application (from a [custom component](./ui-component-types/root-components/custom)); see the hypothetical example after the screenshot below
* **Forms to validate** - select from the dropdown which element will be validated (you can also select its children)
* **Dismiss process** - if the UI action is added on a subprocess and this parameter is true, triggering the UI action dismisses the subprocess view (useful for subprocesses displayed as modals)
* **Show loader?** - if this option is true, a loader is displayed until a WebSocket event is received (a new screen or new data)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_actions_multiple_configs.png)
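For instance, a hypothetical **Custom body** might look like the JSON below; when a custom component executes the action with extra parameters, those parameters are merged over these defaults:

```json
{
  "origin": "customComponent",
  "consent": {
    "marketing": false
  }
}
```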
## UI actions elements
### Events
You can add an event depending on the element that you select. There are two event types available: **CLICK** and **CHANGE**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_action_events.gif)
Not available for UI actions on [Custom components](./ui-component-types/root-components/custom).
### Action types
The **action type** dropdown will be pre-filled with the following UI action types:
* DISMISS - used to dismiss a modal after action execution
* ACTION - used to link an action to a UI action
* START\_PROCESS\_INHERIT - used to inherit data from another process
* UPLOAD - used to create an upload action
* [EXTERNAL](./ui-actions#external-ui-actions) - used to create an action that will open a link in a new tab
## External UI actions
Used to create an action that will open a link in a new tab.
If we select the EXTERNAL type, a few new options become available:
1. **URL** - web URL that will be used for the external action
2. **Open in new tab** - decides whether the action opens the link in the current tab or in a new one
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_action_external.png)
For more information on how to add actions and how to configure a UI, check the following section:
[Managing a process flow](../../flowx-designer/managing-a-process-flow)
# Buttons
There are two types of buttons available, each with a different purpose.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/basic_buttons.png#center)
## Basic button
Basic buttons are used to perform an action such as unblocking a token to move forward in the process, sending an OTP, and opening a new tab.
### Configuring a basic button
When configuring a basic button, you can customize the button's settings by using the following options:
* [**Properties**](#properties)
* [**UI action**](#ui-action)
* [**Button styling**](#button-styling)
Sections that can be configured regarding general settings:
#### Properties
* **Label** - it allows you to set the label that appears on the button
#### UI action
Here, you can define the UI action that the button will trigger.
* **Event** - possible value: `CLICK`
* **Action Type** - select the action type
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/button1.png)
More details on how to configure UI actions can be found [here](../ui-actions).
### Button styling
#### Properties
This section enables you to select the type of button using the styling tab in the UI Designer. There are four types available:
* Primary
* Secondary
* Ghost
* Text
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/button_type.gif)
For more information on valid CSS properties, click [here](../ui-designer#styling).
#### Icons
To further enhance the Button UI element with icons, you can include the following properties:
* **Icon Key** - the key associated with the icon in the Media Library; select the icon from the **Media Library**
* **Icon Color** - select the color of the icon using the color picker
When a color is set, it fills the entire icon (the SVG's fill), so avoid setting a color on multicolor icons.
* **Icon Position** - define the position of the icon within the button:
* Left
* Right
* Center
When selecting the center position for an icon, the button will display the icon only.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/button_icon.png)
By utilizing these properties, you can create visually appealing Button UI elements with customizable icons, colors, and positions to effectively communicate their purpose and enhance the user experience.
## File upload
This button is used to select a file and run custom validation on it. Only the FlowX props differ from the basic button.
### Configuring a file upload button
When configuring a file upload button, you can customize the button's settings by using the following options:
* [**Properties**](#properties)
* [**UI action**](#ui-action)
* [**Button styling**](#button-styling)
Sections that can be configured regarding general settings:
#### Properties
* **Label** - it allows you to set the label that appears on the button
* **Accepted file types** - the `accept` attribute takes as its value a string containing one or more file type specifiers, [separated by commas](https://html.spec.whatwg.org/multipage/common-microsyntaxes.html#set-of-comma-separated-tokens). Each specifier may take one of the following forms:
| Value | Definition |
| ------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------- |
| audio/\* | Indicates that sound files are accepted |
| image/\* | Indicates that image files are accepted |
| video/\* | Indicates that video files are accepted |
| [MIME type](https://html.spec.whatwg.org/multipage/infrastructure.html#valid-mime-type-with-no-parameters) with no params | Indicates that files of the specified type are accepted |
| string starting with U+002E FULL STOP character (.) (for example, .doc, .docx, .xml) | Indicates that files with the specified file extension are accepted |
* **Invalid file type error**
* **Max file size**
* **Max file size error**
Example of an upload file button that accepts image files:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/file_upload_img.png)
#### UI action
Here, you can define the UI action that the button will trigger.
* **Event** - possible value: `CLICK`
* **Action Type** - select the action type
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/file_upload_action.png)
More details on how to configure UI actions can be found [here](../ui-actions).
### Button styling
The file upload button can be styled using valid CSS properties (more details [here](../ui-designer#styling)).
# Collection component
The Collection component functions as a versatile container element, allowing you to iterate through a list of elements and display them according to their specific configurations.
## Configurable properties
* `collectionSource`: This property specifies the process key where a list can be found. It should be a valid array of objects.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/collection_source_key1.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/%20collection_source_key.png)
## Example usage
Here's an example of configuring a Collection component to display a list of products:
![Collection configuration for displaying products](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/collection_example.png)
Source collection data example using an [**MVEL business rule**](../../../actions/business-rule-action/business-rule-action):
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/collection_mvel.png)
```java
output.put("processData", //this is the key
{
"products": [ // this is the source that will populate the data on collection
{
"name": "Product One Plus",
"description": "The plus option",
"type": "normal"
},
{
"name": "Product Two Premium",
"description": "This is premium product",
"type": "recommended"
},
{
"name": "Product Basic",
"description": "The most basic option",
"type": "normal"
},
{
"name": "Gold Product",
"description": "The gold option",
"type": "normal"
}
]
}
);
```
The above configuration will render the Collection as shown below:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/render_collection.gif)
Components within a collection use **relative paths** to the collection source. This means that wherever the collection is found inside the process data, the components inside the collection need their keys configured relative to that collection.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/collection_relative_key.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/collection_key_rule.png)
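For instance, pairing the business rule above with a Text component inside the collection, the keys would relate as in this illustrative sketch (not an exact configuration export):

```json
{
  "collectionSource": "processData.products",
  "textComponentKey": "name"
}
```

The Text component uses `name`, not `processData.products.name`, because its key is resolved relative to each iterated item.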
To send and display dynamic data received on the keys you define to the frontend, make sure to include the following data structure in your root UI element using Message parameters. For instance, if you want to include data for the `processData` key mentioned earlier, your configuration should resemble this:
```json
{
"processData": ${processData}
}
```
Please note that JavaScript expressions for hiding or disabling UI components cannot be applied within elements inside a Collection. Ensure your UI logic accounts for this limitation.
To enable the definition of multiple prototypes for a single Collection and display elements from the same collection differently, an additional container known as a *collection prototype* is required. For more information on collection prototypes, please refer to the next section:
# Collection prototype
A Collection prototype is an additional container type that allows you to define multiple prototypes for a single Collection. This feature enables you to display elements from the same collection differently.
Imagine you are designing a user interface where users can browse a list of products, with one product deserving special emphasis as the recommended choice. In this scenario, you can employ a collection containing different collection prototypes, each tailored for regular and recommended products.
## Configurable properties
1. **Prototype Identifier Key** - This key instructs the system on where to look within the iterated object to identify the prototype to display. In the example below, the key is "type."
2. **Prototype Identifier Value** - This value should be present at the Prototype Identifier Key when this `COLLECTION_PROTOTYPE` should be displayed. In the example below, the value can be "normal" or "recommended."
## Example
Let's revisit the example we used in the Collection section:
![Collection prototype for normal product](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/c.prototype1.png)
![Collection prototype for recommended product](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/c.prototype2.png)
Sample source collection data for products:
```java
output.put("processData",
{
"products": [
{
"name": "Product One Plus",
"description": "The plus option",
"type": "normal"
},
{
"name": "Product Two Premium",
"description": "This is premium product",
"type": "recommended" // source for type - recommended
},
{
"name": "Product Basic",
"description": "The most basic option",
"type": "normal" //source for type - normal
},
{
"name": "Gold Product",
"description": "The gold option",
"type": "normal"
}
]
}
);
```
The above configuration will render:
![Collection with two prototypes as rendered by the SDK](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/render_collection.gif)
## Adding elements with UI Actions
When configuring elements that incorporate UI actions, there are a few considerations to keep in mind. These adjustments enable users to select products for further processing in subsequent steps of the workflow.
To facilitate the selection of a product from the list, you must first add an [Action](../../../actions/actions) to the [User task node](../../../node/user-task-node) associated with this UI element:
![Node Action that saves the selected product to the process's data.](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/col_prot_action.png)
This **save-item** action is categorized as **manual** since it is user-triggered and **mandatory** because product selection is a prerequisite for proceeding to the next [Node](../../../node/) in the process. Additionally, it is marked as **Repeatable** to allow users to change their selected product.
Pay attention to the **Data to send** section, which specifies where in the **process data** the selected product (the one the user clicked on) should be saved. In this example, it is saved under the `selectedProduct` key.
Now that you have a [node action](../../../actions/actions) defined, you can proceed to add a UI action to the collection prototype. This UI action can be added directly to the collection prototype UI element or other UI elements within it that support UI actions. For more information on UI actions, click [**here**](../../ui-actions).
![Select product element and its UI Action configuration](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/add_ui_action_col_prot.png)
The **Collection Item Save Key** field plays an important role in the UI action configuration of the element that carries the UI action. This field is how we pass the value of the **Product** that the user selected to the [node action](../../../actions/actions) created in **Step 1**, named *save-item*.
In our example, we set **Collection Item Save Key** to be `selectedProduct`.
**IMPORTANT**: The `selectedProduct` key is how we expose the data from the **Collection** to the [node action](../../../actions/actions). It is **imperative** that the name in the **Collection Item Save Key** matches the one used in the **Data to send** input of the node action.
### Result
Before clicking the collection prototype UI element with the attached UI action, this is how the process data appears:
![Process data before selecting a product](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/c.prototype_result.png)
After selecting a product from the list (notice the new field `selectedProduct`):
![Process data after selecting a product](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/c.prototype_result_final.png)
Please note that JavaScript expressions for hiding or disabling UI components cannot be applied within elements inside a Collection prototype. Ensure your UI logic accounts for this limitation.
# File preview
The File Preview UI element is a user interface component that enables users to preview the contents of files quickly and easily without fully opening them. It can save time and enhance productivity, providing a glimpse of what's inside a file without having to launch it entirely.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/doc_preview.gif)
File preview UI elements offer various benefits such as conveying information, improving the aesthetic appeal of an interface, providing visual cues and feedback or presenting complex data or concepts in a more accessible way.
## Configuring a File preview element
A File Preview element can be configured for both mobile and web applications.
### File preview properties (web)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/file_preview_info.png)
The settings for the File Preview element include the following properties:
* **Title**: Specifies the title of the element. If the file is downloaded or shared, the title serves as the file name used in the preview component.
* **Subtitle**: Allows for the inclusion of a subtitle for the element.
* **Display mode**: Depending on the selected display method, the following properties are available:
* **Inline** → **Has accordion**:
* `false`: Displays the preview inline without an expand/collapse option.
* `true` (Default View: Collapsed): Displays the preview inline with an expand/collapse option, initially collapsed.
* `true` (Default View: Expanded): Displays the preview inline with an expand/collapse option, initially expanded.
* **Modal** → view icon is enabled
* **Has accordion**: Introduces a Bootstrap accordion, facilitating the organization of content within collapsible items. It ensures that only one item is expanded at a time.
* **Source Type**:
* **Media Library**: Refers to PDF documents uploaded in the Media Library.
PDF documents uploaded to the Media Library must adhere to a maximum file size limit of 10 MB.
* **Process Data**: Identifies the key location within the process where the document is sourced, establishing the linkage between the document and process data.
* **Static**: Denotes the document's URL, serving as a fixed reference point.
It's worth noting that the inline view can raise accessibility issues if the file preview's height exceeds the screen height.
### File preview properties (mobile)
Both iOS and Android devices support the share button.
#### iOS
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/doc_preview_ios.gif)
#### Android
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/doc_preview_android.gif)
### File preview styling
The File Preview styling property enables you to customize the appearance of the element by adding valid CSS properties; for more details, click [here](../../ui-designer/ui-designer#styling).
When you drag and drop a File Preview element in the UI Designer, it comes with the following default styling properties:
#### Sizing
* **Fit W** - auto
* **Fit H** - fixed / Height - 400 px
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/doc_preview_styling.png)
## File Preview example
Below is an example of a File Preview UI element with the following properties:
* **Display mode** - Inline
* **Has accordion** - True
* **Default view** - Expanded
* **Source Type** - Static
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/doc_preview_example.gif)
# Checkbox
A checkbox form field is an interactive element in a web form that provides users with multiple selectable options. It allows users to choose one or more options from a pre-determined set by simply checking the corresponding checkboxes.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_new.png)
This type of form field can be used to gather information such as interests, preferences, or approvals, and it provides a simple and intuitive way for users to interact with a form.
## Configuring the Checkbox element
### Checkbox generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Checkbox styling**](#checkbox-styling)
#### Process data key
Process data key establishes the binding between the checkbox element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the checkbox.
* **Helpertext**: Additional information about the checkbox element, which can be optionally hidden within an infopoint.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_props.png)
#### Datasource configuration
* **Default value**: The default value of the checkbox.
* **Source Type**: The source can be Static, Enumeration, or Process Data.
* **Add option**: Define label and value pairs here.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_datasource.png)
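For a Static source, the options are simple label/value pairs; a hypothetical set for this checkbox might look like:

```json
[
  { "label": "I agree to the terms and conditions", "value": "terms" },
  { "label": "Subscribe to the newsletter", "value": "newsletter" }
]
```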
#### Validators
The following validators can be added to a checkbox: `required` and `custom` (more details [here](../../validators)).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_validators.png)
#### Hide/disable expressions
The checkbox behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured for expressions:
* **Hide condition**: A JavaScript expression that hides the checkbox element when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the checkbox when it returns a truthy value.
These expressions can be used with any form element. See the following example for details:
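For instance, assuming a hypothetical process key `application.client.hasAccount` bound to another form element, a hide condition like the one below keeps the checkbox hidden until that field is filled in:

```javascript
${application.client.hasAccount} === null || ${application.client.hasAccount} === ""
```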
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
#### UI actions
UI actions can be added to the checkbox element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Checkbox settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the checkbox label.
* **Helper**: Override helper text/info point.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
### Checkbox styling
* **Type**: Set the type of the checkbox. Possible values:
* bordered
* clear
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
* **Direction**: Determines the orientation of the layout, which can be either horizontal or vertical.
* **Gap**: Specifies the size of the space between rows and columns.
* **Columns**: Indicates the number of columns in the layout.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there's an additional sizing style property specific to checkbox elements:
* **Height** (pt - points): Determines the vertical size of the checkbox element on the screen.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there's an additional sizing style property specific to checkbox elements:
* **Height** (dp - density-independent pixels): Determines the vertical size of the checkbox element on the screen.
#### Checkbox style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov.png)
Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_common.png)
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_label.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_helper.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_error.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_unselected.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_selected.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_disabled_unselected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_disabled_selected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_hover_unselected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_hover_selected_state.png)
You can import or push the overrides from one platform to another without having to configure them multiple times.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_push_import.gif)
# Datepicker
The datepicker (Calendar Picker) is a lightweight component that allows end users to enter or select a date value.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/datepicker_form_field.png)
The default datepicker format is `DD.MM.YYYY`.
## Configuring the datepicker element
### Datepicker generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Datepicker styling**](#datepicker-styling)
#### Process data key
Process data key establishes the binding between the datepicker element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the datepicker element.
* **Placeholder**: Text that appears within the datepicker element when it is empty.
* **Min Date**: Set the minimum valid date selectable in the datepicker.
* **Max Date**: Set the maximum valid date selectable in the datepicker.
* **Min Date error, Max Date error**: The error messages displayed when a typed date falls outside the valid range.
* **Helpertext**: Additional information about the datepicker element, which can be optionally hidden within an infopoint.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/datepicker1.png)
#### Datasource configuration
**Default Value**: The default value of the datepicker element; it autofills the datepicker when you run the process.
#### Validators
The following validators can be added to a datepicker: `required`, `custom`, `isSameOrBeforeToday` or `isSameOrAfterToday` (more details [here](../../validators)).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/datepicker2.png)
#### Hide/disable expressions
The datepicker behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured for expressions:
* **Hide condition**: A JavaScript expression that hides the datepicker element when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the datepicker element when it returns a truthy value.
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
#### UI actions
UI actions can be added to the datepicker element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the action type.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/datepicker3.png)
### Datepicker settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the datepicker label.
* **Helper**: Override helper text/info point.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
### Datepicker styling
#### Sizing
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there’s an additional sizing style property:
* **Height** (pt - points): Determines the vertical size of the datepicker element on the screen.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there’s an additional sizing style property:
* **Height** (dp - density-independent pixels): Determines the vertical size of the datepicker element on the screen.
#### Datepicker style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_all.png)
Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_com_props.png)
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_label.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_helper.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_error.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_empty_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_select_active_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_select_filled_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_select_disabled_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ovselect_error_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_select_hover-state.png)
You can import or push the overrides from one platform to another without having to configure them multiple times.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_push_import.gif)
# Input
An input field is a form element that enables users to input data with validations and can be hidden or disabled.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_element.png)
## Configuring the Input element
### Input generic settings
The settings added in the **Generic** tab apply to all platforms, including Web, iOS, and Android:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Input styling**](#input-styling)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_props.png)
#### Process data key
Process data key establishes the binding between the input element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the input field.
* **Placeholder**: Text that appears within the input field when it is empty.
* **Type**: Defines the type of data the input field can accept, such as text, number, email, or password.
* **Prefix**: Label appearing at the start of the input field.
* **Suffix**: Label appearing at the end of the input field.
* **Has Clear**: Option to include a content clear mechanism.
* **Helpertext**: Additional information about the input field, which can be optionally hidden within an infopoint.
* **Update on Blur**: Update behavior triggered when the input field loses focus.
#### Datasource configuration
The default value for the element can be configured here; it autofills the input field when you run the process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_datasource1_new.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_datasource2.png)
#### Computed datasource value
To add a computed value, you have to explicitly check the **Computed value** option (represented by the `f(x)` icon), which transforms the desired field into a JavaScript editor.
Check the following example:
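For instance, assuming two hypothetical process keys `application.input.price` and `application.input.quantity` bound to other inputs, a computed value could derive the total from them:

```javascript
${application.input.price} * ${application.input.quantity}
```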
#### Validators
Incorporating validators into your inputs adds an extra layer of assurance to your data. (For further insights, refer to [this link](../../validators)).
The following example shows the essential role of validators in action, using a required validator:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/Input_validat.gif)
#### Hide/disable expressions
Define the input field's behavior using JavaScript expressions to control its visibility or disablement. Configure the following properties for expressions:
* **Hide condition**: A JavaScript expression that hides the input field when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the input field when it returns a truthy value.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_hide.png)
In the example above, we used a rule to hide the "Mortgage" input field while another field has a null value (is not filled in): it remains hidden until users fill in the "Income" field.
#### Hide expression example
We will use the key defined on the "Income" input field to create the JavaScript hide condition to hide the "Mortgage" input field:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input.income.png)
* Rule used:
```javascript
${application.input.income} === null || ${application.input.income} === ""
```
* Result:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_hide_result.gif)
#### Disable example
For example, you can use a disabled condition to disable an input element based on what values you have on a second input.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_disabled_condition.png)
When you type 'TEST' in the first input (Name), the second input (Test) will be disabled:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/disabled_input.gif)
* Rule used:
```javascript
${application.input.name} == 'TEST'
```
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
#### UI actions
UI actions can be added to the Input Field to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_ui_actions_new.gif)
### Input settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the input label.
* **Helper**: Override helper text/info point.
* **Placeholder**: Override the placeholder.
* **Prefix**: Override the prefix.
* **Suffix**: Override the suffix.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Input styling
* **Left Icon**: You can include an icon on the left side of the Input element. This icon can serve as a visual cue or symbol associated with the input field's purpose or content.
* **Right Icon**: Same as the left icon, but displayed on the right side of the Input element.
#### Icons properties
* **Icon Key**: The key associated with the icon in the Media Library; select the icon from the **Media Library**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/input_icons.png)
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there's an additional sizing style property specific to input elements:
* **Height** (pt - points): Determines the vertical size of the input box on the screen.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there's an additional sizing style property specific to input elements:
* **Height** (dp - density-independent pixel): Determines the vertical size of the input box on the screen.
#### Input style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_all.png)
Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_common_props.png)
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_label.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_helper.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_error.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_empty_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_active_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_filled_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_disabled_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_error_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_hover_state.png)
You can import or push the overrides from one platform to another without having to configure them multiple times.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_push_import.gif)
# Radio
Radio buttons are normally presented in radio groups (a collection of radio buttons describing a set of related options). Only one radio button in a group can be selected at the same time.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/radio_form_field.png)
## Configuring the radio field element
### Radio generic settings
These allow you to customize the generic settings for the Radio element:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Radio styling**](#radio-styling)
#### Process data key
Process data key establishes the binding between the radio element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/radio_process_key.png)
#### Properties
* **Label**: The visible label for the radio element.
* **Helpertext**: Additional information about the radio element, which can be optionally hidden within an infopoint.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/radio_label_info.png)
#### Datasource configuration
* **Default value**: Autoselect an option from the radio element based on this value. You need to specify the value from the value/label pairs defined in the Datasource tab.
* **Source Type**: The source can be Static, Enumeration, or Process Data.
* **Add option**: Define label and value pairs here.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/radio_datasource.png)
#### Validators
The following validators can be added to a radio: `required` and `custom` (more details [here](../../validators)).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/radio_validators.png)
#### Hide/disable expressions
The radio element's behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured for expressions:
* **Hide condition**: A JavaScript expression that hides the Radio element when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the Radio element when it returns a truthy value.
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/radio_validators.png)
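As a minimal sketch, assuming a hypothetical process data key `application.client.type` set by another element, a hide condition for a radio group could look like this:
```javascript
// Hypothetical key: hide the radio group until a client type has been selected
${application.client.type} === null || ${application.client.type} === ""
```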
#### UI actions
UI actions can be added to the radio element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Radio settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the radio label.
* **Helper**: Override helper text/info point.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
### Radio styling
* **Type**: Set the type of the radio. Possible values:
* bordered
* clear
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
* **Direction**: Determines the orientation of the layout, which can be either horizontal or vertical.
* **Gap**: Specifies the size of the space between rows and columns.
* **Columns**: Indicates the number of columns in the layout.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, the Gap value is measured differently:
* **Gap**: pt - points instead of pixels
Similar styling considerations apply to Android as for web.
However, for mobile applications, the Gap value is measured differently:
* **Gap**: dp - density-independent pixels
#### Radio style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov.png)
Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_common.png)
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_label.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_helper.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_error.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_unselected.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_selected.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_disabled_unselected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_disabled_selected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_hover_unselected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_hover_selected_state.png)
You can import or push the overrides from one platform to another without having to configure them multiple times.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_push_import.gif)
# Segmented button
The segmented button allows users to pick exactly one option from a group, and you can configure between 2 and 5 options per group. It is easy to use and helps make your application more approachable.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/segmented_button1.gif)
## Configuring the segmented button
### Segmented button generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Segmented button styling**](#segmented-button-styling)
#### Process data key
Process data key establishes the binding between the segmented button and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the segmented button.
* **Helpertext**: Additional information about the segmented button, which can be optionally hidden within an infopoint.
#### Datasource configuration
* **Default Value**: The default value of the segmented button (it can be selected from one of the static source values).
* **Source Type**: Static by default.
* **Add option**: Define label and value pairs here.
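Conceptually, each option is a label/value pair, the same shape used by the other selection elements in this document. An illustrative set of static options (labels and values are hypothetical):
```javascript
// Illustrative static datasource for a segmented button
const options = [
  { label: "Individual", value: "IND" },
  { label: "Company", value: "COM" }
];
```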
#### Validators
The following validators can be added to a segmented button: `required` and `custom` (more details [here](../../validators)).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/segmented_button_props.png)
#### UI actions
UI actions can be added to the segmented button element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the action type.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Segmented button settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the segmented button label.
* **Helper**: Override helper text/info point.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
#### Sizing
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
Similar styling considerations apply to Android as for web.
#### Segmented button style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov.png)
Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_common.png)
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_label.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_helper.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_error.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_unselected.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_selected.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_disabled_unselected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_disabled_selected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_hover_unselected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Icon color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_hover_selected_state.png)
You can import or push the overrides from one platform to another without having to configure them multiple times.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_push_import.gif)
# Select
The Select form field is an element that allows users to choose from a list of predefined options. Each option consists of a label, which is displayed in the dropdown menu, and a corresponding value, which is stored upon selection.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/select_form_field.png)
For example, consider a scenario where you have a label "Sports" with the value "S" and "Music" with the value "M". When a user selects "Sports" in the process instance, the value "S" will be stored for the "Select" key.
## Configuring the Select element
### Select generic settings
These allow you to customize the generic settings for the Select element:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Select styling**](#select-styling)
#### Process data key
Process data key establishes the binding between the select element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the select element.
* **Placeholder**: Text that appears within the select element when it is empty.
* **Empty message**: Text displayed for custom type when no results are found.
* **Search for options**: Displays a search to filter options.
* **Has Clear**: Option to include a content clear mechanism.
* **Helpertext**: Additional information about the select element, which can be optionally hidden within an infopoint.
#### Datasource configuration
* **Default value**: Autofill the select field with this value. You need to specify the value from the value/label pairs defined in the Datasource tab.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/default_value_select.gif)
* **Source Type**: The source can be Static, Enumeration, or Process Data.
* **Add option**: Define label and value pairs here.
#### Validators
You can add multiple validators to a select field. For more details, refer to [**Validators**](../../validators).
#### Hide/disable expressions
The select field's behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured for expressions:
* **Hide condition**: A JavaScript expression that hides the Select field when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the Select field when it returns a truthy value.
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
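For instance, assuming a hypothetical key `application.client.country` bound to another element, a disabled condition could be:
```javascript
// Hypothetical key: disable the Select field while no country has been chosen
${application.client.country} === null || ${application.client.country} === ""
```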
#### UI actions
UI actions can be added to the select element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Select settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the select label.
* **Helper**: Override helper text/info point.
* **Placeholder**: Override the placeholder.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
### Select styling
* **Left Icon**: You can include an icon on the left side of the Select element. This icon can serve as a visual cue or symbol associated with the select element's purpose or content.
#### Icons properties
* **Icon Key**: The key associated with the icon in the Media Library; select the icon from the **Media Library**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/input_icons.png)
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there's an additional sizing style property specific to select elements:
* **Height** (pt - points): Determines the vertical size of the select element on the screen.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there's an additional sizing style property specific to select elements:
* **Height** (dp - density-independent pixels): Determines the vertical size of the select element on the screen.
#### Select style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_all.png)
Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_com_props.png)
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_label.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_helper.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_error.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_empty_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_select_active_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_select_filled_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_select_disabled_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ovselect_error_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_select_hover-state.png)
You can import or push the overrides from one platform to another without having to configure them multiple times.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_push_import.gif)
## Example - dynamic dropdowns
As mentioned previously, dropdowns can be populated from static data, enumerations, or **process data**. Let's build an example that uses **process data** to drive **dynamic dropdowns**.
To create this kind of process, we need the following elements:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/d_dropdowns.gif)
* a [**task node**](../../../node/task-node) (this will be used to set which data will be displayed on the dropdowns - by adding a business rule on the node)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/d_dropdowns1.gif)
* a [**user task node**](../../../node/user-task-node) (this holds the client forms, where we add the SELECT elements)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/d_dropwdowns2.gif)
### Creating the process
Follow the next steps to create the process from scratch:
1. Open **FlowX Designer** and from the **Processes** tab select **Definitions**.
2. Click on the breadcrumbs (top-right corner) then click **New process** (the Process Designer will now open).
3. Now add all the **necessary nodes** (as mentioned above).
### Configuring the nodes
1. On the **task node**, add a new **Action** (this will set the data for the dropdowns) with the following properties:
* Action type - **Business Rule**
* **Automatic**
* **Mandatory**
* **Language** (we used an [**MVEL**](../../../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel) script to create a list of objects)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/d_business_rukle.png)
Below you can find the MVEL script used in the above example:
```java
output.put("application",
{
    "client": {
        "identity": [
            {
                "value": "001",
                "label": "Eddard Stark"
            },
            {
                "value": "002",
                "label": "Sansa Stark"
            },
            {
                "value": "003",
                "label": "Catelyn Stark"
            }
        ]
    },
    "contracts": {
        "001": [
            {
                "value": "c001",
                "label": "Eddard Contract 1"
            },
            {
                "value": "c007",
                "label": "Eddard Contract 2"
            }
        ],
        "003": [
            {
                "value": "c002",
                "label": "Catelyn Contract 1"
            },
            {
                "value": "c003",
                "label": "Catelyn Contract 2"
            },
            {
                "value": "c004",
                "label": "Catelyn Contract 3"
            }
        ],
        "002": [
            {
                "value": "c005",
                "label": "Sansa Contract 1"
            }
        ]
    }
});
```
2. On the **user task node**, add a new **Action** (a submit action; this will validate the forms and save the data) with the following properties:
* **Action type** - Save Data
* **Manual**
* **Mandatory**
* **Data to send** (the key on which we added client details and contracts as objects in the business rule) - `application`
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/d_action_ut.png)
### Configuring the UI
Follow the next steps to configure the UI needed:
1. Select the **user task node** and click the **brush icon** to open [**UI Designer**](../../ui-designer).
2. Add a [**Card**](../root-components/card) UI element as a [**root component**](../root-components/) (this will group the other elements inside it) with the following properties:
* Generic:
* **Custom UI payload** - `{"application": ${application}}` - so the frontend will know which data to display dynamically when selecting values from the **SELECT** element
* **Title** - *Customer Contract*
3. Inside the **card**, add a [**form element**](./).
4. Inside the **form**, add two **select elements**: the first will represent, for example, the *Customer Name* and the second the *Contract ID*.
5. For the first select element (Customer Name), set the following properties:
* **Process data key** - `application.client.selectedClient`
* **Label** - Customer Name
* **Placeholder** - Customer Name
* **Source type** - Process Data (to extract the data added in the **task node**)
* **Name** - `application.client.identity`
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/select_customer.png)
6. For the second select element (Contract ID) set the following properties:
* **Process data key** - `application.client.selectedContract`
* **Label** - Contract ID
* **Placeholder** - Contract ID
* **Source Type** - Process Data
* **Name** - `application.contracts`
* **Parent** - `SELECT` (choose from the dropdown list); see the lookup sketch after these steps
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/select_contract.png)
7. Add a button under the form that contains the select elements with the following properties:
* **Label** - Submit
* **Add UI action** - add the submit action attached earlier to the user task node
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/submit_b.png)
8. Test and run the process by clicking **Start process**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/dynamic_ex.gif)
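Conceptually, the second (child) select resolves its options from the value selected in the parent. A minimal sketch of that lookup, using the keys configured above (illustrative only; the actual resolution is handled by the platform):
```javascript
// Illustrative only: how the Contract ID options follow the Customer Name selection.
// "application" stands for the process data produced by the business rule above.
const selectedClient = application.client.selectedClient;      // e.g. "002" (Sansa Stark)
const contractOptions = application.contracts[selectedClient]; // [{ value: "c005", label: "Sansa Contract 1" }]
```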
# Slider
The slider allows users to select a single value from a predefined range by dragging a handle along a track. It is well suited for numeric input such as amounts, percentages, or durations.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/slider.gif)
## Configuring the slider element
### Slider generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Slider styling**](#slider-styling)
#### Process data key
Process data key establishes the binding between the slider element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the slider element.
* **Show value label**: A toggle option that determines whether the current selected value of the slider is displayed as a label alongside the slider handle.
* **Helpertext**: Additional information about the slider element, which can be optionally hidden within an infopoint.
* **Min Value**: The minimum value or starting point of the slider's range; it defines the lowest selectable value.
* **Max Value**: The maximum value or end point of the slider's range; it sets the highest selectable value.
* **Suffix**: Optional text or a symbol displayed after the value on the slider handle or value label, commonly used to provide context or units of measurement.
* **Step size**: The increment by which the slider value changes when moved; it defines the discrete intervals at which the slider can be adjusted, allowing users to make more precise value selections. For example, a percentage slider might use a Min Value of 0, a Max Value of 100, a Step size of 5, and a Suffix of "%".
#### Datasource configuration
**Default Value**: The initial value (a static integer) set on the slider when it is first displayed or loaded. It serves as the starting point or pre-selected value; users can keep it or adjust it as desired.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/slider_general.png)
#### Validators
The following validators can be added to a slider: `required` and `custom` (more details [here](../../validators)).
#### Hide/disable expressions
* **Hide condition**: A JavaScript expression that hides the slider element when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the slider element when it returns a truthy value.
It’s important to make sure that disabled fields have the same expression configured under the path expressions → hide.
#### UI actions
UI actions can be added to the slider element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Multiple sliders
You can also use multiple interdependent slider UI elements, as you can see in the following example:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/multiple_sliders.gif)
You can improve the configuration of the slider using computed values as in the example above. These values provide a more flexible and powerful approach for handling complex use cases. You can find an example by referring to the following documentation:
[**Dynamic & computed values**](../../dynamic-and-computed-values#computed-values)
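For instance, a computed value can derive one slider's limit from another slider's current value. A minimal sketch, assuming hypothetical process keys `application.loan.amount` and `application.loan.downPayment`:
```javascript
// Hypothetical computed value: cap the down payment slider at half of the loan amount
${application.loan.amount} / 2
```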
### Slider settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the slider label.
* **Helpertext**: Override helper text/info point.
* **Show value**: Override the show value option.
* **Suffix**: Override the suffix.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
### Slider styling
#### Sizing
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
Similar styling considerations apply to Android as for web.
#### Slider style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov.png)
Style options:
* Limits font **\[FONT]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/select_ov_com_props.png)
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_label.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_helper.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_error.png)
* Empty **\[COLOR]**
* Filled **\[COLOR]**
* Knob color **\[COLOR]**
* Limits **\[COLOR]**
* Value **\[COLOR]**
On iOS, overrides for the **Knob** are not available.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_unselected.png)
* Empty **\[COLOR]**
* Filled **\[COLOR]**
* Knob color **\[COLOR]**
* Limits **\[COLOR]**
* Value **\[COLOR]**
On iOS, overrides for the **Filled** and **Empty** states, as well as the **Knob**, are not available.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_selected.png)
You can import or push the overrides from one platform to another without having to configure them multiple times.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_push_import.gif)
# Switch
A switch (also known as a toggle switch) is another form element that can be utilized to create an intuitive user interface. The switch allows users to select a response by toggling it between two states. Based on the selection made by the user, the corresponding Boolean value of either true or false will be recorded and stored in the process instance values for future reference.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/switch_form_field.gif)
## Configuring the switch element
### Switch generic settings
The available configuration options for this form element are:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Switch styling**](#switch-styling)
#### Process data key
Process data key establishes the binding between the switch element and process data, enabling its later use in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the switch element.
The Label field supports Markdown syntax, enabling you to customize the text appearance with ease; see the example below. To explore the Markdown syntax and its various formatting options, click [**here**](https://www.markdownguide.org/cheat-sheet/).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/label_attributed.png)
* **Helpertext**: Additional information about the switch element, which can be optionally hidden within an infopoint.
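As noted above, the switch label supports Markdown. For instance, a label along these lines (text and URL are illustrative):
```markdown
I agree to the [terms and conditions](https://example.com/terms) **and** the privacy policy.
```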
#### Datasource configuration
**Default Value**: The initial state of the switch element, either switched on or switched off. By default, it is switched on.
#### Validators
The following validators can be added to a switch element: `requiredTrue` and `custom` (more details [here](../../validators)).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/switch_details.png)
#### Hide/disable expressions
* **Hide condition**: A JavaScript expression that hides the Switch element when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the Switch element when it returns a truthy value.
It’s important to make sure that disabled fields have the same expression configured under the path expressions → hide.
#### UI actions
UI actions can be added to the Switch element to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Switch settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the switch label.
* **Helper**: Override helper text/info point.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
### Switch styling
**Label position**: The label of the Switch can be positioned either as `start` or `end`.
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
Similar styling considerations apply to Android as for web.
#### Switch style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov.png)
Style options:
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_label.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_helper.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_error.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Knob color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_unselected.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Knob color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_selected.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Knob color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_disabled_unselected_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Knob color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/checkbox_ov_disabled_selected_state.png)
You can import or push the overrides from one platform to another without having to configure them multiple times.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_push_import.gif)
# Text area
A text area is a form element used to capture multi-line input from users in a conversational interface. The text area component is typically used for longer inputs such as descriptions, comments, or feedback, providing users with more space to type their responses.
![Text area](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/text_area41.png)
It is an important tool for creating intuitive and effective conversational interfaces that can collect and process large amounts of user input.
## Configuring the text area element
### Text area generic settings
These settings added in the Generic tab are available and they apply to all platforms including Web, iOS, and Android:
* [**Process data key**](#process-data-key)
* [**Properties**](#properties)
* [**Datasource**](#datasource-configuration)
* [**Validators**](#validators)
* [**Expressions**](#expressions)
* [**UI actions**](#ui-actions)
* [**Text area styling**](#text-area-styling)
#### Process data key
Process data key creates the binding between the form element and process data, so it can be later used in [decisions](../../../node/exclusive-gateway-node), [business rules](../../../actions/business-rule-action/business-rule-action) or [integrations](../../../node/message-send-received-task-node#from-integration).
#### Properties
* **Label**: The visible label for the text area element.
* **Placeholder**: Text that appears within the text area when it is empty.
* **Has Clear**: Option to include a content clear mechanism.
* **Helpertext**: Additional information about the text area field (can be hidden inside an infopoint).
* **Update on Blur**: Update behavior triggered when the text area loses focus.
#### Datasource configuration
The default value for the element can be configured here; it will autofill the text area when the process runs.
#### Validators
You can add multiple validators to a text area field. For more details, refer to [**Validators**](../../validators).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/text_area_props.png)
#### Hide/disable expressions
The text area's behavior can be defined using JavaScript expressions for hiding or disabling the element. The following properties can be configured for expressions:
* **Hide condition**: A JavaScript expression that hides the text area when it returns a truthy value.
* **Disabled condition**: A JavaScript expression that disables the text area when it returns a truthy value.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/hide_text_area.png)
In the example above, we used a rule to hide a text area element if the value of the switch element above is false.
#### Hide expression example
We will use the key defined on the switch element to create a JavaScript hide condition to hide the text area element:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/hide_text_area1.png)
* Rule used:
```javascript
${application.client.hasHouse} === false
```
* Result:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/text_area_result.gif)
#### Disable example
For example, you can use a disabled condition to disable a text area element based on the values of other elements.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/disable_text_area1.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/disable_text_area2.png)
When you choose a specific value on the radio element (Contact via SMS), the text area is disabled based on the disabled condition.
* Rule used:
```javascript
${application.client.contact} == "prfS"
```
It's important to make sure that disabled fields have the same expression configured under the path expressions → hide.
#### UI actions
UI actions can be added to the text area field to define its behavior and interactions.
* **Event**: Possible value - `CHANGE`.
* **Action Type**: Select the type of the action to be performed.
For more details on how to configure a UI action, click [**here**](../../ui-actions).
### Text area settings overrides
There are instances where you may need to tailor settings configured in the **Generic** settings tab. This proves especially beneficial when you wish to adjust these settings to appear differently across various platforms such as Web, Android, or iOS.
Available override settings:
* Properties:
* **Label**: Override the text area label.
* **Helper**: Override helper text/info point.
* **Placeholder**: Override the placeholder.
* Expressions:
* **Hide**: Override the hide expression.
Overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
### Text area styling
#### Fit W (fit width)
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
#### H Type (height type)
* **fixed**: Maintains a fixed height (pixels)
* **auto**: Adjusts the height automatically based on the content.
#### Rows
* **Min Rows**: Sets the minimum number of rows.
* **Max Rows**: Sets the maximum number of rows.
Similar styling considerations apply to iOS as for web.
* **fixed height**: Measured in pt (points).
Similar styling considerations apply to Android as for web.
* **fixed height**: Measured in dp (density-independent pixels).
#### Text area style overrides options
Theme overrides refer to the ability to modify or customize the appearance and behavior of UI components by overriding default theme settings. This can be applied at various levels, such as specific elements or entire sections, and can be platform-specific (Web, iOS, Android).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_all.png)
Style options:
* Border radius **\[TEXT]**
* Border width **\[TEXT]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_common_props.png)
* Default state
* Text color **\[COLOR]**
* Disabled state
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_label.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
* Helper Tooltip
* Text style **\[FONT]**
* Text color **\[COLOR]**
* Background color **\[COLOR]**
* Icon Color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_helper.png)
* Text color **\[COLOR]**
* Text style **\[FONT]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_error.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_empty_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_active_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_filled_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_disabled_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_error_state.png)
* Border color **\[COLOR]**
* Background color **\[COLOR]**
* Text color **\[COLOR]**
* Right icon color **\[COLOR]**
* Left icon color **\[COLOR]**
* Prefix/Suffix color **\[COLOR]**
* Placeholder color **\[COLOR]**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/input_ov_hover_state.png)
You can import or push the overrides from one platform to another without having to configure them multiple times.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/ov_push_import.gif)
# Image
Image UI elements are graphical components of a user interface that display a static or dynamic visual representation of an object, concept, or content.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/image_general.png)
These elements can be added to your interface using the UI Designer tool, and they are often used to convey information, enhance the aesthetic appeal of an interface, provide visual cues and feedback, support branding and marketing efforts, or present complex data or concepts in a more intuitive and accessible way.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/image_generic.png)
## Configuring an image
Configuring an image in the UI Designer involves specifying various settings and properties. Here are the key aspects of configuring an image:
### Image settings
The image settings consist of the following properties:
* **Source location** - the location from where the image is loaded:
* [**Media Library**](#media-library)
* [**Process Data**](#process-data)
* [**External**](#external)
Depending on which **Source location** is selected, different configurations are available:
### Media library
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/image_media_library1.png)
* **Image key** - the key of the image from the media library
* **Select from media library** - search for an item by key and select it from the media library
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/search_item_by_key.png)
* **Upload to media library** - add a new item (upload an image on the spot)
* **upload item** - supported formats: PNG, JPG, GIF, SVG, WebP (maximum size: 1 MB)
* **key** - the key must be unique and cannot be changed afterwards
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/upload_to_media_lib.png)
### Process data
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/process_data.gif)
* Identify the **Source Type**. It can be either a **URL** or a **Base 64 string**.
* Locate the data using the **Process Data Key**.
* If using a URL, provide a **Placeholder URL** for public access. This is the URL where the image placeholder is available.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/process_data_img.png)
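As a sketch, assuming a hypothetical Process Data Key `application.client.photo`, the process data consumed by an Image element with Source Type set to URL might look like this:
```javascript
// Hypothetical process data shape for an Image element (Source Type = URL)
const application = {
  client: {
    photo: "https://example.com/uploads/client-photo.png"
  }
};
```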
### External
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/image_external.png)
* **Source Type**: it can be either a **URL** or a **Base 64 string**
* **Image source**: the valid URL of the image.
* **Placeholder URL**: the public URL where the image placeholder is available
## UI actions
The UI actions property allows you to add a UI Action, which must be configured on the same node. For more details on UI Actions, refer to the documentation [here](../ui-actions).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/image_ui_actions.png#center)
## Image styling
The image styling property allows you to add or to specify valid CSS properties for the image. For more details on CSS properties, click [here](../../ui-designer/ui-designer#styling).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/image_styling.png)
# Indicators
The indicators (Message UI elements) allow you to display different types of messages.
Messages can be categorized into the following types:
* **Info**: Used to convey general information to users.
* **Warning**: Indicates potential issues or important notices.
* **Error**: Highlights errors or critical issues.
* **Success**: Communicates successful operations or completion.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/indicators_gen.png)
## Properties
When configuring a message, you have the following properties:
* **Message**: The content of the message body. This property supports Markdown attributes such as bold, italic, bold italic, strikethrough, and URLs, allowing you to format the message content.
* **Type**: As mentioned above, there are multiple indicator types: info, warning, error, success.
* **Expressions**: You can define expressions to control when the message should be hidden. This can be useful for dynamically showing or hiding messages based on specific conditions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/indicators_prop.png)
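For instance, assuming a hypothetical process key `application.payment.status`, a hide expression for a success message could be:
```javascript
// Hypothetical key: hide the success message until the payment has completed
${application.payment.status} !== "COMPLETED"
```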
Info example with markdown:
```markdown
If you are encountering any difficulties, please [contact our support team](mailto:support@flowx.ai).
```
When executed, it will look like this:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/indicators1.png)
## Types and Usage
Here's how you can use the Message UI element in your UI design:
### Info
If you are encountering any difficulties, please [contact our support team](mailto:support@flowx.ai).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/info_indicator.png)
### Error
An error occurred while processing your request. Please try again later.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/indicator_error.png)
### Success
Your payment was successfully processed. Thank you for using our services!
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/indicator_success.png)
## Indicators styling
To create an indicator with specific styling, sizing, typography, and color settings, you can use the following configuration:
### Style
The Style section allows you to customize the appearance of your indicator UI element. You can apply the following style to achieve the desired visual effect:
* **Text**: Displays only the icon and the text.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/indicator_text.png)
* **Border**: Displays the icon, the text and the border.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/indicator_border.png)
* **Fill**: It will fill the UI element's area.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/indicator_fill.png)
For more valid CSS properties, click [**here**](../../ui-designer/ui-designer#styling).
# Card
A card in FlowX.AI is a graphical component designed for the purpose of grouping and aligning various elements. It offers added functionality by incorporating an accordion feature, allowing users to expand and collapse content as needed.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/card_ex.gif)
The following properties can be configured:
## Properties and settings
### Settings (applicable across all platforms)
Settings added in the **Generic** tab apply to all platforms, including Web, iOS, and Android.
#### When used as root
When the Card is utilized as the root component, it offers the following settings:
* **Custom UI Payload**: A valid JSON describing the custom data transmitted to the frontend when the process reaches a specific user task (see the example after this list).
* **Title**: The title displayed on the card.
* **Subtitle**: Additional descriptive text accompanying the card.
* **Has accordion**: Introduces a Bootstrap accordion, facilitating the organization of content within collapsible items. It ensures that only one item is expanded at a time.
The accordion feature is not available for mobile configuration.
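As an illustration, a Custom UI Payload is plain JSON; the keys below are hypothetical and would mirror whatever your frontend expects:
```json
{
  "flowName": "Loan application",
  "showHelpBanner": true
}
```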
#### Mobile configuration (iOS & Android)
For mobile configuration (iOS and Android), you can also configure the following property (not available on Web configuration):
* **Screen title**: Set the screen title used in the navigation bar on mobile devices (available only when the card element is set as the root).
![Android Screen Title](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/screen_title_android%20%282%29.png)
### Card settings overrides
You may want to override the card title or subtitle set in the **Generic** tab so they display differently on certain platforms; for example, a title might need to be shorter on mobile devices.
Available properties overrides for web (overriding properties set in **Generic** settings tab):
* Title
* Subtitle
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/overrides_web.png)
Available properties overrides for Android (overriding properties set in **Generic** settings tab):
* Title
* Subtitle
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/overrides_android.png)
Available properties overrides for iOS (overriding properties set in **Generic** settings tab):
* Title
* Subtitle
#### When not used as root
When the card is not the root, you can configure: **Title**, **Subtitle**, **Card Style** and **Has Accordion**.
Leverage cards in your designs to organize and present content, enhancing the overall user experience.
## Styling
When designing for the web, consider the layout options available for the card. These options include:
* **Direction**: Choose between **Horizontal** or **Vertical** alignment to define the flow of components. For example, select Horizontal for a left-to-right layout.
* **Justify (H)**: Specify how content is aligned along the main axis. For instance, select end to align items to the end of the card.
* **Align (V)**: Align components vertically within their card using options such as top, center, or bottom alignment.
* **Wrap**: Enable wrapping to automatically move items to the next line when they reach the end of the card. Useful for creating multi-line layouts.
* **Gap**: Define the space between components to control the distance between each item. Adjusting the gap enhances visual clarity and organization.
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, for mobile applications, there's an additional layout style property specific to cards when used as the root component:
* **Scrollable**: This property allows you to define the desired behavior of the screen, specifying whether it should be scrollable or not. By default, this property is set to true, enabling scrolling functionality.
Similar styling considerations apply to Android as for web.
However, for mobile applications, there's an additional layout style property specific to cards when used as the root component:
* **Scrollable**: This property allows you to define the desired behavior of the screen, specifying whether it should be scrollable or not. By default, this property is set to true, enabling scrolling functionality.
### Theme overrides
Customize the appearance by overriding style options coming from your default theme. Available overrides:
* Border width
* Border radius
* Border color
* Background color
* Shadow
* Title
* Title Color
* Subtitle
* Subtitle Color
## Validating elements
To ensure the validation of all form elements within a card upon executing a Save Data action such as "Submit" or "Continue," follow these steps:
1. When adding a UI action to a button inside a card, locate the dropdown menu labeled **Add form to validate**.
2. From the dropdown menu, select the specific form or individual form elements that you wish to validate.
3. By choosing the appropriate form or elements from this dropdown, you can ensure comprehensive validation of your form data, enhancing the integrity and reliability of your user interactions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/forms_to_validate.png)
# Container
A container in FlowX is a versatile building block that lets you group components and arrange them as needed, providing flexibility in UI design. It can also serve as the root component for your design.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/container_ex.gif)
The following properties can be configured in the container:
## Properties and settings
### Settings (applicable across all platforms)
Settings added in the **Generic** tab apply to all platforms, including Web, iOS, and Android.
#### When used as root
When employed as the root component, the container offers the following settings:
* **Custom UI Payload**: A valid JSON describing the data sent to the frontend when the process reaches a specific user task.
* **Expressions (Hide condition)**: JavaScript expressions utilized to dynamically hide components based on conditions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/container_root_new.png)
#### When not used as root
When the container is not used as the root, you can configure only the **Hide Condition** property.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/container_not_root.png)
By leveraging containers, you gain the ability to structure your UI elements efficiently, enhancing the overall design and usability of your application.
### Container settings overrides
You may want to override settings configured in the **Generic** tab to be displayed differently on mobile devices.
* **Hide expressions**: Use Overrides in the Settings tab to hide a container on a specific platform.
For instance, you can set a container to appear on all platforms, or create an override to hide it on mobile but show it on web.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/override_hide.gif)
To achieve this:
1. Select a Container element in the UI Designer, then navigate to Settings -> your desired platform -> Overrides (+) -> Expressions -> Hide.
2. Add your JavaScript Hide condition (see the example below).
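Because the override itself lives on a specific platform, hiding a container on mobile only can be as simple as adding the Hide override on iOS/Android with an always-true condition, a minimal sketch:
```ts
// Added only as an iOS/Android override, so the container stays visible on web
true
```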
## Styling
When designing for the web, consider the layout options available for the container. These options include:
* **Position**
* **Static**: This style remains fixed and does not scroll along with the page content.
* **Sticky**: When the sticky property is enabled, the container maintains its position even during scrolling.
* **Sticky layout**: You have the option to specify minimum distances between the container and its parent element while scrolling. At runtime, sticky containers will keep their position on scroll relative to the top/bottom/right/left margins of the parent element.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/Screenshot%202023-12-13%20at%2017.18.39.png)
* **Direction**: Choose between **Horizontal** or **Vertical** alignment to define the flow of components. For example, select Horizontal for a left-to-right layout.
* **Justify (H)**: Specify how content is aligned along the main axis. For instance, select end to align items to the end of the container.
* **Align (V)**: Align components vertically within their container using options such as top, center, or bottom alignment.
* **Wrap**: Enable wrapping to automatically move items to the next line when they reach the end of the container. Useful for creating multi-line layouts.
* **Gap**: Define the space between components to control the distance between each item. Adjusting the gap enhances visual clarity and organization.
Adjusting the size of components is crucial for a responsive design. Fit W (width) offers three options:
* **fill**: Fills the available space.
* **fixed**: Maintains a fixed width.
* **auto**: Adjusts the width automatically based on content.
Similar styling considerations apply to iOS as for web.
However, there are exceptions, particularly with **Sticky layout**:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/sticky_mobile.png)
In mobile configurations, the right and left properties for **Sticky layout** are ignored by the iOS renderer.
Similar styling considerations apply to Android as for web.
However, there are exceptions, particularly with **Sticky layout**:
![Sticky layout on Android](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/sticky_mobile.png)
In mobile configurations, the right and left properties for **Sticky layout** are ignored by the Android renderer.
### Theme overrides
Customize the appearance by overriding style options coming from your default theme. Available overrides:
* Border width
* Border radius
* Border color
* Background color
* Shadow
More layout demos available below:
For more information about styling and layout configuration, check the following section:
# Custom component
Custom components are developed in the web application and referenced here by component identifier. This will dictate where the component is displayed in the component hierarchy and what actions are available for the component.
Starting with platform version **3.4.7**, for User Tasks containing UI Elements, the Backend (BE) by default sends the Frontend (FE) all available data as process variables with matching keys when the page is rendered.
If the User Task also includes a **custom component**, the BE should send, in addition to default keys, objects mentioned in the "Message" option of the root element.
To add a custom component in the template config tree, we need to know its unique identifier and the data it should receive from the process model.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_designer_custom.png)
The properties that can be configured are as follows:
* **Identifier** - This enables the custom component to be displayed within the component hierarchy and determines the actions available for the component.
* **Input keys** - These are used to specify the pathway to the process data that components will utilize to receive their information.
* [**UI Actions**](../../ui-actions) - actions defined here will be made available to the custom component
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_designer_custom_settings.png#center)
## Prerequisites (before creation)
* **Angular Knowledge**: You should have a good understanding of Angular, as custom components are created and imported using Angular.
* **Angular CLI**: Ensure that you have Angular CLI installed.
* **Development Environment**: Set up a development environment for Angular development, including Node.js and npm (Node Package Manager).
* **Component Identifier**: You need a unique identifier for your custom component. This identifier is used for referencing the component within the application.
## Creating a custom component (Web)
To create a Custom Component in Angular, follow these steps:
1. Create a new Angular component using the Angular CLI (for example, `ng generate component`) or manually.
2. Implement the necessary HTML structure, TypeScript logic, and SCSS styling to define the appearance and behavior of your custom component.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/loader_comp.png)
## Importing the component
After creating the Custom Component, you need to import it into your application.
In your `app.module.ts` file (located at `src/app/app.module.ts`), add the following import statement:
```ts
import { YourComponent } from '@app/components/yourComponent.component';
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/import_cus.gif)
## Declaration in AppModule
In the same `app.module.ts` file, declare your Custom Component within the `declarations` array in the `@NgModule` decorator:
```ts
@NgModule({
declarations: [
// ...other components
YourComponent
],
// ...other module configurations
})
```
## Declaration in FlxProcessModule
To make your Custom Component available for use in processes created in FLOWX Designer, you need to declare it in `FlxProcessModule`.
In your `process.module.ts` file (located at `src/app/modules/process/process.module.ts`), add the following import statement:
```ts
import { YourComponent } from '@app/components/yourComponent.component';
```
Then, declare your Custom Component in the `FlxProcessModule.forRoot` function:
```ts
FlxProcessModule.forRoot({
components: {
// ...other components
yourComponent: YourComponent
},
// ...other module configurations
})
```
## Using the custom component
Once your Custom Component is declared, you can use it for configuration within your application.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/loader_component.gif)
## Data input and actions
The Custom Component accepts input data from processes and can also include actions extracted from a process. These inputs and actions allow you to configure and interact with the component dynamically.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/cst_input_data.png)
## Extracting data from processes
There are multiple ways to extract data from processes to use within your Custom Component. You can utilize the data provided by the process or map actions from the BPMN process to Angular actions within your component.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/cst_loader_input.png)
Make sure that the Angular actions that you declare match the names of the process actions.
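As a minimal sketch, a custom component is an ordinary Angular component; the input name `data` below is an illustrative assumption for how process data configured as Input keys might arrive, so verify the exact contract in the Angular renderer SDK:
```ts
import { Component, Input } from '@angular/core';

// Minimal sketch of a custom loader component. `data` is an assumed input
// name; the actual way the SDK passes process data may differ by version.
@Component({
  selector: 'app-custom-loader',
  template: `<div class="loader" *ngIf="data?.isLoading">Loading…</div>`,
})
export class LoaderComponent {
  // Populated from the process keys configured as Input keys in the UI Designer
  @Input() data: { isLoading?: boolean } = {};
}
```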
## Styling with CSS
To apply CSS classes to UI elements within your Custom Component, you first need to identify the UI element identifiers within your component's HTML structure. Once identified, you can apply defined CSS classes to style these elements as desired.
Example:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/Screenshot%202023-10-10%20at%2012.29.51.png)
## Custom component example
Below you can see an example of a basic custom loader component built with Angular:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/2023-10-10%2012.01.58.gif)
## Additional considerations
* **Naming Conventions**: Be consistent with naming conventions for components, identifiers, and actions. Ensure that Angular actions match the names of process actions as mentioned in the documentation.
* **Component Hierarchy**: Understand how the component fits into the overall component hierarchy of your application. This will help determine where the component is displayed and what actions are available for it.
* **Documentation and Testing**: Document your custom component thoroughly for future reference. Additionally, testing is crucial to ensure that the component behaves as expected in various scenarios.
* **Security**: If your custom component interacts with sensitive data or performs critical actions, consider security measures to protect the application from potential vulnerabilities.
* **Integration with FLOWX Designer**: Ensure that your custom component integrates seamlessly with FLOWX Designer, as it is part of the application's process modeling capabilities.
## Creating a custom component (iOS)
Enhance your skills with our academy course! Learn how to develop and integrate a custom iOS component with FlowX.AI:
# Root Components in UI Design
Root components serve as the foundation for structuring user interfaces, providing the framework for arranging and configuring different types of components.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/root_components_new.gif)
Root components play a crucial role in defining the layout and hierarchy of elements within an application. Here's an overview of key root components and their functionalities:
### Container
The Container component is a versatile element used to group and configure the layout for multiple components of any type. It provides a flexible structure for organizing content within a UI.
Learn more about [Container components](./container).
### Custom
Custom components are Angular components developed within the container application and dynamically passed to the SDK at runtime. These components are identified by their unique names and enable developers to extend the functionality of the UI.
Explore [Custom components](./custom) for advanced customization options.
### Card
The Card component functions similarly to a Container component but also offers the capability to function as an accordion, providing additional flexibility in UI design.
Discover more about [Card components](./card).
A card or a container can hold a hierarchical component structure, as in this example:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/root_comp_str.png)
Available children for **Card** and **Container** are:
1. [**Form**](../form-elements/) - Used to group and align form field elements (inputs, radios, checkboxes, etc.).
For more information about the form elements, please refer to the [**Form elements**](../form-elements/) section.
2. [**Image**](../image) - Allows you to configure an image in the document.
3. **Text** - Simple text can be configured via this component; basic configuration is available.
4. **Link** - Used to configure a hyperlink that opens in a new tab.
5. [**Button**](../buttons) - Multiple options are available for configuration, with the most important part being the possibility to add actions.
6. [**File Upload**](../buttons) - A specific type of button that allows you to select a file.
7. [**Custom**](./custom) - Custom components.
8. [**Indicators**](../indicators) - Message UI elements to display different types of messages.
Learn more:
# Table
The Table component is a versatile UI element allowing structured data display with customizable columns, pagination, filtering, and styling options.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/table_rn.png)
**Web-only UI Component:** The Table component is designed specifically for web applications and includes theming options for consistent design across the platform.
## Configuring the Table element
The Table component closely mirrors some of the functionalities of the [**Collection**](./collection/collection) component. When configuring a Table, you define the number of columns, which automatically generates two prototypes: `th` (table header) and `tr` (table row).
* **`th`** - Used for defining column headers.
* **`tr`** - Repeated for each row based on the source data.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/table_config.png)
The table example above is generated from dynamic data. We used a JavaScript business rule to prepopulate the table with an array of objects, similar to the collection configuration.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/dynamic_table.png)
```js
users = [
{
"firstName": "John",
"lastName": "Doe",
"loanAmount": {
"amount": 1000.00,
"code": "USD"
},
"birthDate": "1985-01-01T00:00:00Z",
"email": "john.doe@example.com"
},
{
"firstName": "John",
"lastName": "Does",
"loanAmount": {
"amount": 2000.00,
"code": "USD"
},
"birthDate": "1985-02-01T00:00:00Z",
"email": "john.does@example.com"
},
{
"firstName": "Jane",
"lastName": "Doe",
"loanAmount": {
"amount": 3000.00,
"code": "USD"
},
"birthDate": "1985-03-01T00:00:00Z",
"email": "jane.doe@example.com"
},
{
"firstName": "Jane",
"lastName": "Does",
"loanAmount": {
"amount": 4000.00,
"code": "USD"
},
"birthDate": "1985-04-01T00:00:00Z",
"email": "jane.does@example.com"
},
{
"firstName": "Jim",
"lastName": "Doe",
"loanAmount": {
"amount": 5000.00,
"code": "USD"
},
"birthDate": "1985-05-01T00:00:00Z",
"email": "jim.doe@example.com"
},
{
"firstName": "Jim",
"lastName": "Does",
"loanAmount": {
"amount": 6000.00,
"code": "USD"
},
"birthDate": "1985-06-01T00:00:00Z",
"email": "jim.does@example.com"
},
{
"firstName": "Jake",
"lastName": "Doe",
"loanAmount": {
"amount": 7000.00,
"code": "USD"
},
"birthDate": "1985-07-01T00:00:00Z",
"email": "jake.doe@example.com"
},
{
"firstName": "Jake",
"lastName": "Does",
"loanAmount": {
"amount": 8000.00,
"code": "USD"
},
"birthDate": "1985-08-01T00:00:00Z",
"email": "jake.does@example.com"
},
{
"firstName": "Jill",
"lastName": "Doe",
"loanAmount": {
"amount": 9000.00,
"code": "USD"
},
"birthDate": "1985-09-01T00:00:00Z",
"email": "jill.doe@example.com"
},
{
"firstName": "Jill",
"lastName": "Does",
"loanAmount": {
"amount": 10000.00,
"code": "USD"
},
"birthDate": "1985-10-01T00:00:00Z",
"email": "jill.does@example.com"
},
{
"firstName": "Joe",
"lastName": "Doe",
"loanAmount": {
"amount": 11000.00,
"code": "USD"
},
"birthDate": "1985-11-01T00:00:00Z",
"email": "joe.doe@example.com"
},
{
"firstName": "Joe",
"lastName": "Does",
"loanAmount": {
"amount": 12000.00,
"code": "USD"
},
"birthDate": "1985-12-01T00:00:00Z",
"email": "joe.does@example.com"
}
];
application = {
"users": users
};
output.put("application", application);
```
## Table Elements
* **Table Header**: The top row containing column labels.
* **Rows**: Each row represents a single data entry.
* **Cells**: Individual cells hold data points within each row.
* **Cell Elements**: Customizable elements within each cell for dynamic data presentation.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/table_elements.png)
When creating a table, three columns, each with a corresponding cell, are added by default.
### Table generic settings
The following generic settings are found in the Generic tab and apply to all platforms (Web, iOS, and Android):
* [**Source key**](#source-key)
* [**Columns**](#columns)
* [**Table body**](#table-body)
* [**Expressions**](#expressions)
* [**Table Styling**](#table-styling)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/table_settings.png)
#### Source key
Similar to Collection, the Table component requires a `source` (an array of objects) to populate rows dynamically; for the business rule above, the source key would be `application.users`.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/Screenshot%202024-10-15%20at%2019.22.32.png)
#### Columns
Customize the columns displayed in the table, including adding, deleting, or renaming columns.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/table_columns.gif)
#### Table Body
* **Pagination**: Control how data is displayed by configuring pagination or enabling scrolling.
* **Page Size**: Set the maximum number of entries displayed per page.
* **Scrollable**: Disable pagination to enable continuous scrolling through data.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/table_pagination.gif)
#### Hide condition
Using the data key from a related element, apply JavaScript expressions to control the Table’s visibility based on dependencies.
**Demonstration**:
## Table styling
### Sizing
* **Fit Width**: Expands to fill available width (default).
* **Fit Height**: Automatically adjusts height to content (default).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/table_sizing.png)
### Cell styling
**Layout Options:**
* **Direction**: Horizontal (default).
* **Justify**: Space-around (evenly spaces elements within each cell).
* **Align**: Start (left-aligned).
* **Wrap**: Enables text wrapping.
* **Gap**: 8px spacing between cell elements.
**Column Style Options:**
* **Width Fit Options**:
* **Fill**: Fills available container space.
* **Fixed**: Keeps a fixed column width.
* **Auto**: Adjusts column width to fit content.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/cell_styling.png)
* **User Resizable Columns**: Adjust column width by dragging the column edges in the header, enhancing customization.
![User Resizable Columns](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/2024-10-15%2019.37.13.gif)
This Table component enhances flexibility and offers a cohesive design, integrated with FlowX.AI’s theming framework for a consistent, web-optimized user experience.
## Actions in table cells
When configuring actions within a Table component, each action is bound to a unique table item key, ensuring that interactions within each cell are tracked and recorded precisely. The key allows the action to target specific rows and store results or updates effectively.
* **Table Item Save Key**: This key is essential for identifying the exact cell or row where an action should be executed and saved. It ensures that data within each cell remains distinct and correctly mapped to each table entry.
* **Custom Key for Data Saving**: You must add a custom key to save action data in a table cell, especially when handling unique interactions like inline editing or dynamic data updates within cells (see the sketch after this list).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/table_item_save_key.png)
* **Supported Action Types**: Available actions include:
  * **Action**: Initiates predefined actions within the cell.
  * **Start Process Inherit**: Enables workflows inherited from process configurations to be triggered.
  * **Upload**: Allows file or data uploads directly within a table cell.
  * **External**: Creates an action that opens a link in a new tab.
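As a purely illustrative sketch, if the table is sourced from `application.users` and the table item save key is `selectedUser`, an action triggered on a row could persist data shaped roughly like this (the exact structure depends on your process configuration):
```json
{
  "selectedUser": {
    "firstName": "Jane",
    "lastName": "Doe"
  }
}
```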
## FAQs
**What is the `table item key` used for?**
The `table item key` is essential for identifying specific rows and cells within a table. When actions are triggered in table cells, this key ensures that the action applies to the correct item, allowing data to be saved accurately in the intended cell or row. Without this key, actions may not track or save data correctly.

**How does the Table component differ from Collection?**
While the Table component shares structural similarities with Collection, it is tailored specifically for tabular data. Unlike Collection, it supports easy customization of columns, row pagination, and in-place editing (in future versions), streamlining the handling of tabular data.

**Is conditional styling supported?**
Conditional styling is a planned feature for version 5.0.0. Once available, it will allow you to apply specific styles to cells or rows based on conditions, such as highlighting critical items or overdue entries.

**Are nested tables supported?**
No, nested tables (tables within other tables) are currently unsupported and are not planned for future updates. This limitation keeps the Table component optimized for its intended use without overcomplicating its structure.

**Which actions do table cells support?**
Table cells support various actions:
* **Action**: Executes a predefined action within the cell.
* **Start Process Inherit**: Triggers workflows based on inherited process configurations.
* **Upload**: Allows direct file or data uploads within a cell.
Each of these actions requires a `table item key` to ensure data accuracy.

**How can pagination be customized?**
Pagination can be customized to control the number of entries displayed per page. Alternatively, you can enable scrollable view mode by disabling pagination, which provides a continuous, scrollable data view.

**Is in-place editing available?**
Direct in-place editing is scheduled for version 4.6.0, allowing users to edit data directly within table cells. This feature will improve efficiency for workflows requiring frequent table data updates.

**Does the Table component require a data source?**
Yes, the Table component requires a source in the form of an array of objects. The source allows the Table to dynamically populate cells based on the data structure, ensuring rows and columns align with your data set.

**How are custom actions configured?**
Custom actions can be configured using the UI Designer. Each action added to a cell will leverage the `table item key` to perform tasks such as saving edits, initiating workflows, or uploading files directly from the table.

**Can the Table's visibility be controlled dynamically?**
Yes, the Table component supports JavaScript expressions to control visibility dynamically. By setting up expressions, you can create conditions that hide certain columns or rows when specific criteria are met.
# Typography
Typography is an important aspect of design that greatly influences how users perceive and interact with your content. In this section, we'll explore how to effectively utilize two essential UI elements, "Text" and "Link."
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/typography_gen.png)
## Text
The "Text" UI element serves as a tool dedicated solely to presenting text within your user interface. Whether it's paragraphs or descriptions, the "Text" UI element. Through manipulation of embedded CSS properties, you're afforded to edit the visual appearance and formatting of text, aligning it with your design preferences.
### Markdown compatibility
The Text UI element also gives you the flexibility of Markdown formatting. You can enhance your text using various markdown tags, including:
* **Bold**
```markdown
**Bold**
```
* *italic*
```markdown
*italic*
```
* ***bold italic***
```markdown
***bold italic***
```
* strikethrough
```markdown
~~strikethrough~~
```
* URL
```markdown
[URL](https://url.net)
```
Let's take the following markdown text example:
```markdown
Be among the *first* to receive updates about our **exciting new products** and releases. Subscribe [here](flowx.ai/newsletter) to stay in the loop! Do not ~~miss~~ it!
```
When running the process, it will be displayed like this:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/text_markdown.png)
### Text styling
The Styling section provides you with granular control over how your text is displayed, ensuring it aligns with your design vision.
#### Spacing
Adjust the spacing around your text by setting margins; for example, a margin of `16px 0 0 0` adds 16px of space above the text.
#### Sizing
Choose "Fit W" to ensure the text fits the width of its container.
#### Typography
Define the font properties:
* **Font family**: Choose the desired font family.
* **Font weight**: Define the thickness of the font.
* **Font size**: Specify the font size in pixels (px).
* **Height**: Set the line height in pixels (px).
* **Color**: Determine the text color.
#### Align
Determine the text alignment.
## Link
Links are essential for navigation and interaction. The "Link" UI element creates clickable text that directs users to other pages or external resources. Here's how to create a link:
# UI Designer
The FlowX platform offers a variety of ready-to-use UI components that can be used to create rich web interfaces. These include common form elements like input fields, dynamic dropdown menus, checkboxes, radio and switch buttons, as well as other UI elements like image, text, anchor links, etc. The properties of each component can be customized further using the details tab, and design flexibility is achieved by adding styles or CSS classes to the pre-defined components. The UI templates are built in a hierarchical structure, with a root component at the top.
## Using the UI Designer
The FlowX platform includes an intuitive **UI Designer** for creating diverse UI templates. You can use various elements such as basic buttons, indicators, and forms, as well as predefined [collections](./ui-component-types/collection/collection) and [prototypes](./ui-component-types/collection/collection_prototype). To access the UI Designer, follow these steps:
1. Open **FlowX Designer** and select **Definitions** from the **Processes** tab.
2. Select a **process** from the process definitions list.
3. Click the **Edit** **process** button.
4. Select a **node** or a **navigation area** then click the **brush icon** to open the **UI Designer**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ui_ui_designer.gif)
The UI designer is available for [**User task**](../node/user-task-node) nodes and **Navigation Areas** elements.
After adding a specific component to the node, the right-side menu will display more configuration options.
For more flexibility, undo or redo actions are available within the UI Designer. This includes tasks such as dragging, dropping, or deleting elements from the preview section, as well as adjusting settings within the styling and settings panel.
To undo or redo an action, users can simply click the corresponding icons in the UI Designer toolbar, or use the keyboard commands for even quicker access.
## UI components
FlowX offers a wide range of [UI components](./ui-designer#ui-components) that can be customized using the UI Designer. For example, when configuring a [card](./ui-component-types/root-components/card) element (which is a root component), the following properties can be customized:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ui_ui_designer.gif)
### Settings tab
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/uI_designer_panel1.png)
#### Generic tab
This is where you configure the logic and assign process keys, UI actions, and other component settings that are common across all platforms (Web, iOS, Android).
#### Platform-specific settings
For example, on Android, you might want to change the Card title to a shorter one.
To override a general property like a title, follow these steps:
1. Access the UI Designer and select a UI Element, such as a **Card**.
2. From the UI Designer navigation panel, select the **Settings** tab, then select the **desired platform**.
3. Click the "+" button (next to "Overrides") and select **Properties -> Title**, then input your desired value.
Settings overrides can always be imported/pushed from one platform to another:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_pushing_overrides.gif)
Preview your changes in the UI Designer by navigating from one platform to another or by comparing them.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/props_ui_designer_override.gif)
Keep in mind that the preview generated in the UI Designer for iOS and Android platforms is an estimate meant to help you visualize how it might look on a mobile view.
#### Hide expressions
By utilizing **Overrides** in the **Settings** tab, you can selectively hide elements on a specific platform.
To achieve this:
1. Select a UI component in the **UI Designer**, then navigate to **Settings** -> **your desired platform** -> **Overrides (+)** -> **Expressions** -> **Hide**.
2. Add your JavaScript Hide condition.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/hide_condition.gif)
### Styling tab
The Styles tab functions independently for three platforms: Web, iOS, and Android. Here, you can customize styles for each UI component on each platform.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ui_designer_panel2.png)
If you want to customize the appearance of a component to differ from the theme settings, you must apply a **Theme Override**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/theme_overrides_styless.gif)
Theme overrides can be imported from one platform to another.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/theme_overrides_styles.png)
### Preview
When you are editing a process in the **UI Designer**, you can preview it with multiple themes:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/preview_theme_ui_designer.gif)
Overrides are completely independent of the theme, regardless of which theme you choose in the preview mode.
### Layout
There are two main types of layouts for organizing child elements: **Linear** and **Grid**.
* **Linear Layout**: Arranges child elements in a single line, either horizontally or vertically. Ideal for simple, sequential content flow.
* **Grid Layout**: Organizes elements into a structured grid with multiple columns and rows, useful for more complex, multi-dimensional designs.
* **Platform-Specific Layouts**: You can customize layout settings per platform (e.g., Grid on web, Linear on mobile) to ensure optimal responsiveness.
Both layouts offer options to customize direction, alignment, spacing, and wrap behavior for flexibility in design.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/set_linear.png)
### Sizing
When adjusting the **Fit W** (width) and **Fit H** (height) settings, you can control the size and shape of elements as they appear on screen, ensuring that all UI elements are the desired size and fit together neatly:
* Fit W: fill, fixed or auto
* Fit H: fill, fixed or auto
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_sizing.gif)
### Spacing
Margin and padding are CSS properties used to create space between elements in a web page:
* **margin** - the space outside an element
* **padding** - the space inside an element
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_spacing.gif)
### Advanced
For advanced customization, you can add CSS classes to the pre-defined components; this option is available under the **Advanced** section.
By utilizing these styling options in FLOWX.AI, users can create unique and visually appealing interfaces that meet their design requirements.
## Tree view
The Tree View panel displays the component hierarchy, allowing users to easily navigate through the different levels of their interface.
Clicking on a specific component in the tree will highlight the selection in the editor, making it easy to locate and modify.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_designer_tree1.gif)
## UI component types
Different UI component types can be configured using the UI Designer. The UI components are available and can be configured only within **user task** nodes or **navigation areas**.
Depending on the component type different properties are available for configuration.
Understanding these component types will help you to better utilize the UI Designer tool and create rich web interfaces.
* [Container](./ui-component-types/root-components/container)
* [Card](./ui-component-types/root-components/card)
* [Custom](./ui-component-types/root-components/custom)
* [Collection](./ui-component-types/collection/collection)
* [Collection Prototype](./ui-component-types/collection/collection_prototype)
* [Button](./ui-component-types/buttons)
* [File Upload](./ui-component-types/buttons#file-upload)
* [Image](./ui-component-types/image)
* Text
* Link
Form elements are a crucial aspect of creating user interfaces as they serve as the means of collecting information from the users. These elements come in various types, including simple forms, [inputs](./ui-component-types/form-elements/input-form-field), [text areas](./ui-component-types/form-elements/text-area), drop-down menus ([select](./ui-component-types/form-elements/select-form-field)), [checkboxes](./ui-component-types/form-elements/checkbox-form-field), [radio buttons](./ui-component-types/form-elements/radio-form-field), toggle switches ([switch](./ui-component-types/form-elements/switch-form-field)), [segmented buttons](./ui-component-types/form-elements/segmented-button), [sliders](./ui-component-types/form-elements/slider) and [date pickers](./ui-component-types/form-elements/datepicker-form-field). Each of these form elements serves a unique purpose and offers different options for capturing user input.
* Form
* [Input](./ui-component-types/form-elements/input-form-field)
* [Textarea](/4.0/docs/building-blocks/ui-designer/ui-component-types/form-elements/text-area)
* [Select](./ui-component-types/form-elements/select-form-field)
* [Checkbox](./ui-component-types/form-elements/checkbox-form-field)
* [Radio](./ui-component-types/form-elements/radio-form-field)
* [Switch](/4.0/docs/building-blocks/ui-designer/ui-component-types/form-elements/switch-form-field)
* [Segmented button](/4.0/docs/building-blocks/ui-designer/ui-component-types/form-elements/segmented-button)
* [Slider](/4.0/docs/building-blocks/ui-designer/ui-component-types/form-elements/slider)
* [Datepicker](./ui-component-types/form-elements/datepicker-form-field)
* [Message](./ui-component-types/indicators)
**Navigation areas**:
* Page
* Stepper
* Step
* Modal
* Container
# Validators
Validators are an essential part of building robust and reliable applications. They ensure that the data entered by the user is accurate, complete, and consistent. In Angular applications, validators provide a set of pre-defined validation rules that can be used to validate various form inputs such as text fields, number fields, email fields, date fields, and more.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validators_gen.png)
## Predefined validators
Angular provides default validators such as the following:
### Min validator
This validator checks whether a numeric value is smaller than the specified value. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validator_min.png)
### Max validator
This validator checks whether a numeric value is larger than the specified value. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validator_max.png)
### Minlength validator
This validator checks whether the input value has a minimum number of characters. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validator_minlength.png)
### Maxlength validator
This validator checks whether the input value has a maximum number of characters. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.
![maxlength validator](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validator_maxlength.png)
### Required validator
This validator checks whether a value exists in the input field.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validatorss.png)
It is recommended to combine this validator with others like [minlength](#minlength-validator), since those validators do not trigger when there is no value at all.
![required validator](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validators.png)
### Email validator
This validator checks whether the input value is a valid email. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.
![email validator](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validator_email.png#center)
### Pattern validator
This validator checks whether the input value matches the specified pattern (for example, a [regex expression](https://www.regexbuddy.com/regex.html)).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validator_pattern.png#center)
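For example, a pattern parameter could be a regular expression like the following sketch, which accepts only alphanumeric input:
```ts
// Illustrative regex for the pattern validator: letters and digits only
const alphanumericPattern = '^[a-zA-Z0-9]+$';
```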
Other predefined validators are also available:
### isSameOrBeforeToday validator
This validator can be used to validate [datepicker](/4.0/docs/building-blocks/ui-designer/ui-component-types/form-elements/datepicker-form-field) inputs. It checks whether the selected date is today or in the past. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.
![isSameOrBeforeToday](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validator_issameday.png)
### isSameOrAfterToday validator
This validator can be used to validate datepicker inputs. It checks whether the selected date is today or in the future. If there are no characters at all, this validator will not trigger. It is advisable to use this validator with a [required](#required-validator) validator.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validator_issamedayafter.png)
To ensure the validation of all form elements within a card upon executing a Save Data action such as “Submit” or “Continue,” follow these steps:
* When adding a UI action to a button inside a card, locate the dropdown menu labeled **Add form to validate**.
* From the dropdown menu, select the specific form or individual form elements that you wish to validate.
* By choosing the appropriate form or elements from this dropdown, you can ensure comprehensive validation of your form.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/input_validators.png)
## Custom validators
Additionally, custom validators can be created within the web application and referenced by name. These custom validators can have various configurations such as execution type, name, parameters, and error message.
1. **Execution type** - sync/async validator (for more details check [this](https://angular.io/api/forms/AsyncValidator))
2. **Name** - name provided by the developer to uniquely identify the validator
3. **Params** - if the validator needs inputs to decide if the field is valid or not, you can pass them using this list
4. **Error Message** - the message that will be displayed if the field is not valid
The error that the validator returns **MUST** match the validator name.
![custom validator](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/validator_custom.png#center)
#### Custom validator example
Below you can find an example of a custom validator (`currentOrLastYear`) that restricts data selection to the current or the previous year:
##### currentOrLastYear
```typescript
// Registered as a custom validator named `currentOrLastYear`; note that the
// error key returned below matches the validator name, as required.
currentOrLastYear: function currentOrLastYear(AC: AbstractControl): { [key: string]: any } {
  if (!AC) {
    return null;
  }
  // YEAR_FORMAT is assumed to be a moment.js year format string such as 'YYYY'
  const yearDate = moment(AC.value, YEAR_FORMAT, true);
  const currentDateYear = moment(new Date()).startOf('year');
  const lastYear = moment(new Date()).subtract(1, 'year').startOf('year');
  if (!yearDate.isSame(currentDateYear) && !yearDate.isSame(lastYear)) {
    return { currentOrLastYear: true };
  }
  return null;
}
```
##### smallerOrEqualsToNumber
Below is another custom validator example that returns an `AsyncValidatorFn`, a function that can be used to validate form input asynchronously. The validator is called `smallerOrEqualsToNumber` and takes an array of `params` as input.
For this custom validator the execution type should be marked as `async` using the UI Designer.
```typescript
import { AbstractControl, AsyncValidatorFn, ValidationErrors } from '@angular/forms';
import { combineLatest, Observable } from 'rxjs';

export function smallerOrEqualsToNumber(params$: Observable<number>[]): AsyncValidatorFn {
  return (AC: AbstractControl): Promise<ValidationErrors | null> | Observable<ValidationErrors | null> => {
    return new Observable<ValidationErrors | null>((observer) => {
      combineLatest(params$).subscribe(([maximumLoanAmount]) => {
        // valid (null) when there is no value or the value does not exceed the maximum
        const validationError =
          maximumLoanAmount === undefined || !AC.value || Number(AC.value) <= maximumLoanAmount
            ? null
            : { smallerOrEqualsToNumber: true };
        observer.next(validationError);
        observer.complete();
      });
    });
  };
}
```
If the input value is undefined or the input value is smaller or equal to the maximum loan amount value, the function returns `null`, indicating that the input is valid. If the input value is greater than the maximum loan amount value, the function returns a `ValidationErrors` object with a key `smallerOrEqualsToNumber` and a value of true, indicating that the input is invalid.
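As a sketch of how such validators could be made available to processes, they might be registered alongside custom components in `FlxProcessModule.forRoot`; the `validators` option below is an assumption, so verify the exact configuration key against the Angular renderer SDK:
```ts
FlxProcessModule.forRoot({
  components: {
    // ...custom components
  },
  // Hypothetical registration map: each validator is referenced by name from
  // the UI Designer, and its returned error key must match that name.
  validators: {
    currentOrLastYear,
    smallerOrEqualsToNumber,
  },
})
```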
For more details about custom validators please check this [link](../../../sdks/angular-renderer).
Using validators in your application can help ensure that the data entered by users is valid, accurate, and consistent, improving the overall quality of your application.
It can also help prevent errors and bugs that may arise due to invalid data, saving time and effort in debugging and fixing issues.
# Fonts
Fonts management allows you to upload and manage multiple font files, which can be later utilized when configuring UI templates using the UI Designer.
## Managing fonts
The "Font Management" screen displays a table with uploaded fonts. The following details are available:
* **FontFamily**: The names of the uploaded font families.
* **File Name**: The name of the font file.
* **Weight**: The weight of the font, represented by a numeric value.
* **Style**: The style of the font, such as "italic" or "normal".
* **Actions**: This column contains options for managing the uploaded fonts, such as deleting or downloading them.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/fonts4.5.0.png)
## Uploading fonts
### Uploading theme font files
To upload new theme font files, follow these steps:
1. Open **FLOWX Designer**.
2. Navigate to the **Content Management** tab and select **Font files**.
3. Click **Upload font** and choose a valid font file.
The accepted font format is TTF (TrueType Font file).
4. Click **Upload**. You can upload multiple TTF font files.
5. For each uploaded font file, the system will automatically identify information such as font family, weight, and style.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/fonts-metadata.png)
## Exporting fonts
You can use the export feature to export a JSON file containing all the font files.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/export_fonts.png)
The exported JSON will look like this:
```json
{
"fonts": [
{
"fontFamily": "Open Sans",
"filename": "OpenSans-ExtraBoldItalic.ttf",
"weight": 800,
"style": "italic",
"size": 135688,
"storagePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/fonts-folder/1690383294848_OpenSans-ExtraBoldItalic.ttf",
"contentType": "font/ttf",
"application": "flowx",
"flowxUuid": "ce0f75e2-72e4-40e3-afe5-3705d42cf0b2"
},
{
"fontFamily": "Open Sans",
"filename": "OpenSans-BoldItalic.ttf",
"weight": 700,
"style": "italic",
"size": 135108,
"storagePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/fonts-folder/1690383295987_OpenSans-BoldItalic.ttf",
"contentType": "font/ttf",
"application": "flowx",
"flowxUuid": "d3e5e2a0-958a-4183-8625-967432c63005"
}
//...
],
"exportVersion": 1
}
```
## Importing fonts
You can use the import feature to import a JSON file containing the font files. If a font file already exists, you will be notified.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/import_fonts.png)
## Using fonts in UI Designer
For example, let's take an input UI element; you can customize the typography for this UI element by changing the following properties:
* Label:
* Font family
* Style and weight
* Font line size (px)
* Font line height (px)
* Text:
* Font family
* Style and weight
* Font line size (px)
* Font line height (px)
* Helper & errors:
* Font family
* Style and weight
* Font line size (px)
* Font line height (px)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/usinf_fonts.gif)
# System assets
System assets serve as a centralized hub for managing and organizing various types of media files used in themes, including images, GIFs, and more.
# Themes
The theme management feature enables you to easily change the appearance and styling of your application, personalizing its look and feel to match your branding, preferences, or specific requirements.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/themes4.5.0.png)
## Key features of Theme Management
1. **Theme Management:**
* Creation, editing, and management of themes.
* Selection of predefined themes or customization of themes from scratch.
2. **Customization Options:**
* Modification of color schemes, typography, spacing, and visual effects.
* Upload of custom assets like fonts and icons.
3. **Overrides and Variations:**
* Ability to override default UI components styles properties on specific elements or sections.
* Creation of different themes to accommodate different user/client preferences.
4. **Platform Consistency:**
* Consistency of theme styles across different platforms and devices.
5. **Preview:**
* Real-time visualization of theme changes.
6. **Export/Import Functionality:**
* Export of themes for backup, sharing, or reuse across multiple environments (UAT, DEV, etc.).
* Import of themes exported from other environments.
## Creating a new theme
You have two options for creating a theme: import a theme that was exported from another environment (for example, your UAT/DEV environment) or create one manually.
To successfully create a new theme in FlowX Designer, follow these steps:
Locate the "Create New" button positioned above the right of the **Themes** list table.
Click the "Create New" button and enter details for your theme:
* **Theme Name** - pick a name for your theme
* **Font Family** - select your desired font family, by default, the system provides "Open Sans"
If you wish to add a new font, click the link provided under the **Font Family** field, which redirects you to the **Fonts management** section.
* **Choose your primary color** - the default color is `#006BD8`.
Verify that the color format is in **HEX**. If not, an error message will indicate "Please insert a HEX color."
## Configuring a new theme
After creating a theme, you must configure it. To configure your theme effectively, follow these steps:
* Navigate to the settings or customization section of your application (in the UI Designer).
* Look for options related to styling and think about the overall design.
The theme styles mechanism is based on a hierarchy with the following elements: Design Tokens, Global Settings, and Components.
Modify color schemes (using the design tokens), typography, spacing, and other visual elements to match your desired look and feel.
Use the provided tools or controls to adjust theme settings. This might include sliders, color pickers, or dropdown menus.
**The Design Tokens** represent values based on which the theme is built.
* **Color Palette, Shadows, Typography Tokens**: Configure these tokens based on your company's brand guidelines. They ensure reusability and consistency.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/design_tokens_theme.gif)
The **Global Settings** are properties that inherit values from the **Design Tokens** and sit on the top of the hierarchy. These properties are then inherited by the **Components**.
* **Platform-specific Settings**: Configure settings for each platform (web, iOS, Android) based on the global settings you've defined.
* **Styles and Utilities**: General settings applying to all components (styles) and labels, errors, and helper text settings (utilities).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/global_settings_theme.gif)
When setting up your theme, remember that different platforms like web, iOS, and Android have their own settings. You need to configure each platform separately. Only color settings are the same across all platforms.
For example, you can configure a web theme and then leverage the push and import options to push the same configuration from web to iOS or Android.
**Component-level Configuration**: Customize the style of each component type.
Keep in mind that there are differences between platforms; for example, different properties are available for button configuration. What you configure on one platform will not be inherited by the others.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/platform-components.gif)
* Before finalizing the theme configuration, it's crucial to review how the changes will appear across different platforms. This step ensures consistency and allows for platform-specific adjustments if needed.
* You can do that either by using the preview feature from **Themes** or by using the preview mode in the **UI Designer** and switching to your preferred platform.
Keep in mind that the preview generated in the UI Designer for iOS and Android platforms is an estimate meant to help you visualize how it might look on a mobile view.
Here is a quick walkthrough video on how to create and configure a theme:
## Managing themes - process level (theme overrides)
With the Overrides feature, you can override default theme settings on specific elements or sections.
Use Theme Overrides in UI Designer to adjust styles or props for specific UI elements based on the desired platform (Web, iOS and Android).
All components can now be styled with token overrides, for color, typography and shadow settings defined in the theme.
**Theme overrides** in **UI Designer** are applied to the component itself, rather than to specific themes. This means that when switching the view to another theme, the overrides persist and apply to the new theme as well.
### Styles tab
The Styles tab functions independently for three platforms: Web, iOS, and Android. Here, you can customize styles for each UI component on each platform.
If you want to customize the appearance of a component to differ from the theme settings, you must apply a **Theme Override**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/theme_overrides_styless.gif)
Theme overrides can be imported from one platform to another.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/theme_overrides_styles.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ui_designer_panel2.png)
Preview mode: In the UI Designer, overrides are entirely independent of the theme. Regardless of the theme selected in preview mode, you will see the applied override reflected at the UI Designer level.
### Preview
When you are editing a process in the **UI Designer**, you can preview it in multiple themes:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/preview_theme_ui_designer.gif)
Overrides are completely independent of the theme, regardless of which theme you choose in the preview mode.
## Using a Theme in the container application
To integrate the theme into your container application, follow these steps:
* Copy the unique identifier (UUID) associated with the theme.
* Set the copied UUID within your container application (see the sketch below).
* This ensures that the renderers within your application can recognize and apply the specified theme.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/copy_uuid.gif)
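As a purely illustrative sketch (the actual key name and location depend on the renderer SDK and container application you use, so treat the field below as hypothetical), the container configuration might carry the copied UUID along these lines:

```json
{
  "themeUuid": "3fa85f64-5717-4562-b3fc-2c963f66afa6"
}
```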
## Exporting/importing a theme
The Export/Import functionality in the theme management system allows users to export themes for various purposes such as backup, sharing, or reuse across multiple environments.
Additionally, it enables the seamless import of themes previously exported from other environments, facilitating swift integration and continuity across design workflows.
Import is restricted to internal FlowX mechanisms only; themes from external sources like Figma or Zeplin are not supported.
### Exporting a theme
Navigate to the **Theme Management** panel within FlowX Designer.
Select the theme(s) you wish to export.
From the breadcrumbs menu on the right, select **Export Theme**.
The exported theme is saved in a standard format (JSON) and can be downloaded to a local directory or storage location.
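For orientation, an exported theme file could look roughly like the sketch below. The exact schema is internal to FlowX and may change between versions, so treat every field name here as a hypothetical illustration; only the structure (design tokens, per-platform global settings, components) and the `#006BD8` default primary color are taken from this document:

```json
{
  "name": "MyCompanyTheme",
  "designTokens": {
    "colorPalette": { "primary": "#006BD8" },
    "typography": { "fontFamily": "Open Sans" },
    "shadows": { "card": "0px 2px 4px rgba(0, 0, 0, 0.1)" }
  },
  "globalSettings": {
    "web": {},
    "ios": {},
    "android": {}
  },
  "components": {}
}
```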
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/exporting_a_theme.gif)
### Importing a theme
Navigate to the **Theme Management** panel within FlowX Designer.
From the contextual menu on the right, select **Import Theme**.
Import it as a new theme or, if the theme already exists in the target environment, override it.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/importing_a_theme.gif)
### Setting a default theme
You can easily establish a default theme by accessing the contextual menu on the right side of a theme and selecting "Set as Default."
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/default_th.png)
When a default theme is not set (or you haven't created a theme yet), the platform automatically assigns the FlowXTheme, which is the platform's default theme. This ensures that there's always a default theme in place to provide a consistent appearance across processes and interactions within the application.
If you set a specific default theme and later delete it, the platform will revert to the FlowXTheme as the default. This safeguard ensures that a default theme is always available, even if you remove your custom selection.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/2024-04-04%2019.00.33.gif)
Upon opening any process within the UI Designer, the default theme is displayed as the initial preview. This gives users a clear starting point and ensures consistency in the appearance of the process until further customization is applied.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/default2.png)
When creating a new process, you will notice the Default Theme (*FlowXTheme*) as the default preview.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/default1.png)
Furthermore, when you start a process definition, the theme switcher in the run process popup (opened from the process definitions list) defaults to the default theme. This ensures that the default theme is consistently applied during process execution, maintaining visual coherence in user interactions.
# Adding a new node
Once you create a new process definition, you can start configuring it by adding new nodes.
You can choose between a series of available node types below. For an overview of what each node represents, see [BPMN 2.0 basic concepts](../../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn):
A BPMN (Business Process Model and Notation) node is a visual representation of a point in your process. Nodes are added at specific process points to denote the entrance or transition of a record within the process. FlowX supports various node types, each requiring distinct configurations to fulfill its role in the business flow.
### Steps for creating a new node
To create a new node on an existing process:
Open **FlowX.AI Designer** and from the **Processes** tab select **Definitions**.
Select your **process definition** or create a new one.
Make sure you are in edit mode.
Drag and drop one or more **node** elements.
To connect the node that you just created:
* Click the node, select the **arrow** command
* Click the node that you wish to link to the newly added node
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/mapf_add_new_node.gif)
Depending on the type of the **node**, you can define some node details, and a set of values (stages, data stream topics, key name) and you can also add various [actions](../../building-blocks/actions/actions) to it.
Now, check the next section to learn how to add an action to a node.
# Adding an action to a node
We use actions to add business decisions to the flow or link the process to custom integrations and plugins.
A node action is defined as the activity that a node has to handle within a process flow. These actions can vary in type and are utilized to specify communication details for plugins or integrations, include business rules in a process, save data and send various data to be displayed in front-end applications.
For more information about actions, check the following section:
### Steps for creating an action
To create an action:
Open **FlowX.AI Designer** and from the **Processes** tab select **Definitions**.
Select your **process definition**.
Click the **Edit** **process** button.
Add a new **node** or edit an existing one.
The nodes that support actions are [task nodes](../../building-blocks/node/task-node), [user task nodes](../../building-blocks/node/user-task-node), and [send message/receive message tasks](../../building-blocks/node/message-send-received-task-node).
Add an **action** to the node and choose the **action type**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/mapf_add_node_action.gif)
A few **action parameters** will need to be filled in depending on the selected action type.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/actions_paramss.png)
# Adding more flow branches
To split the process flow into parallel branches, use a parallel gateway node.
![Parallel Gateway](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/process_flowx_parallel.png#center)
### Steps for creating a flow with two branches
To create a flow with two branches:
Open **FlowX Designer** and go to the **Definitions** tab.
Click on the **New process** button, using the **breadcrumbs** from the top-right corner.
Add a **start node** and a **parallel gateway node**.
Create two parallel zones by adding different nodes and link the zones after the **parallel gateway node**.
Add another **parallel gateway** to merge the two flow branches back into one branch.
Add an **end node**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/parallel_zn.png)
When working with parallel gateways, tokens play a critical role in ensuring that the process flow is managed correctly. Here's an overview of token behavior in parallel gateways:
When a process reaches a parallel gateway, the gateway creates child tokens for each branch in the parallel paths. Each path operates independently.
Each child token advances through its respective path independently, proceeding from one node to the next based on the sequence and actions defined in the process.
A closing parallel gateway node is used to merge parallel paths back into a single flow. The parent token waits at this closing gateway until all child tokens have completed their respective paths.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/token_parallel.png)
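As a conceptual sketch of this behavior (not an actual engine payload; the node names and status values below are invented for illustration), the instance state while the branches are running could be pictured like this:

```json
{
  "parentToken": { "status": "WAITING", "node": "closing_parallel_gateway" },
  "childTokens": [
    { "id": "child-1", "status": "ACTIVE", "node": "register_request_task" },
    { "id": "child-2", "status": "COMPLETED", "node": "send_confirmation_task" }
  ]
}
```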
# Creating a new process definition
The first step of defining your business process in the FlowX.AI Designer is adding a new process definition for it.
This should include at least one [START](../../building-blocks/node/start-end-node#start-node) and [END](../../building-blocks/node/start-end-node#end-node) node.
## Steps for creating a new process definition
A process definition is the core building block of the platform, serving as the blueprint of a business process composed of nodes linked by sequences. Once defined and published on the platform, a process can be executed, monitored, and optimized. Starting a business process results in the creation of a new instance of this definition.
To create a new **process definition**:
Open **FlowX.AI Designer** and go to the **Definitions** tab.
Click the **New process** button, using the **breadcrumbs** from the top-right corner.
Enter a unique name for your process and click **Create**.
You're automatically taken to the **FlowX.AI Process Designer** editor where you can start building your process.
![Creating a process definition](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/mapf_new_def.gif)
In the following section, you will learn how to add a new node to your newly created process.
# Creating a user interface
You can configure interfaces for both generated and custom screens in FlowX Designer.
Create a simple process:
Go to **FlowX Designer** and navigate to the **Definitions** tab.
Click on the **New Process** button, using the **breadcrumbs** in the top-right corner.
Add a **Start Node**.
Add two **User Tasks** that will represent the screens of the application.
Finish your BPMN process with an **End Node**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/create_ui.png)
Now create a **Navigation Area** (Page) where we will include our user tasks.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/create_ui1.gif)
In the FlowX Designer, you can create the following navigation areas:
* **Stepper**: Breaks progress into logical, numbered steps for intuitive navigation.
* **Tab Bar**: Allows users to switch between different sections or views within the application.
* **Page**: Displays full-page content for an immersive experience.
* **Modal**: Overlays that require user interaction before returning to the main interface.
* **Zone**: Groups specific navigation areas or tasks, like headers and footers.
* **Parent Process Area**: Supports subprocess design under a parent hierarchy, ensuring validation and design consistency.
## Configuring the UI
All visual properties of the UI elements and navigation areas are configured using the **FlowX UI Designer**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/where_ui1.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/where_ui2.png)
### Navigation type
To begin, we need to define the navigation type for our application. The options are:
* Single page form
* Wizard
We will use the **Wizard** type for our example.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/create_ui3.png)
### Configuring the first screen (card)
Open the **UI Designer** for your first **user task**. This will represent the **first card**.
Add a **CARD** element to the UI.
Add a **Form** to the card to group the inputs.
Add an **input** field to the form.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/ui_card1.png)
Add a button with a save data action to advance to the next screen and save the input data.
First, configure the action at the node level. The action, called when the button is clicked, should be **Manual** (not automatic because it is triggered by a user).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/create_ui_save_data.png)
### Testing the first screen
Start the process definition that you just configured.
The card with the **Form** and the **Input** is displayed.
Test the **Input**.
![Test the input](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/test_card_ui.gif)
## Configuring the second screen (card)
Go to your second **user task** and add a new **CARD**.
Add other UI elements of your choice.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/create_ui4.png)
### Testing the final result
Start the process definition again to review the final result:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/final_rez.gif)
# Exporting / importing a process definition version
To export process definition versions and move them between different environments, you can use the export/import feature.
## Export a process definition
You can export a version of your process definition as a JSON file directly from the versioning menu in the **FlowX Designer**:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/export_process_version.png)
## Import a process definition
Once you have exported a version from another environment, press the **Import Process** button in the process definition list; the system opens a local file browser showing the available JSON files.
There are multiple scenarios you can encounter:
* [**New process definition**](#new-process-definition)
* [**Existing process definition with no additional versions**](#existing-process-definition-with-no-additional-versions)
* [**Existing process definition with additional version**](#existing-process-definition-with-additional-version)
### New process definition
The process definition does not exist in the target environment. When the file is submitted, the process definition is added to the target environment; in the branching tree, the imported version is displayed as active and unavailable versions as inactive placeholders.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/import_proc_version.png)
#### Unavailable versions
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/unavailable_version.png)
### Existing process definition with no additional versions
The process definition exists in the target environment and does not contain additional versions compared to the ones in the import file. When you submit the file, the branching tree is updated. Imported and existing versions are displayed as active, while unavailable versions are shown as inactive/placeholders.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/import_2.png)
Additionally, if the current publish policy is "latest submitted" or "latest work in progress" and the import file indicates that the publish version will be overwritten, you'll see information about the new published version.
You'll also have the option to update the publish policy:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/case2_update.png)
### Existing process definition with additional version
You have an existing process definition with additional versions, and you want to overwrite versions in conflict while being warned about the published version.
The process definition in the target environment contains additional versions compared to the ones in the import file. These versions are children of parent versions that will receive other child versions after the import. In this case, you'll see a message about overwritten versions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/Frame%2022749.png)
In case the process definition was exported using an incompatible FlowX Designer version, you will receive an error and not be able to import it. It will first need to be adjusted to match the format needed by your current FlowX Designer version.
# Handling decisions in the flow
To add business decisions in the flow and use them to pick between a flow branch or another, we can use exclusive gateways.
![Exclusive Gateway](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/node/gateway_exclusive.png#center)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/xclsv_mapf.png)
### Steps for creating a flow with exclusive branches
To create a flow with exclusive branches:
Open **FlowX Designer** and go to the **Definitions** tab.
Click on the **New process** button, using the **breadcrumbs** from the top-right corner.
Add a **start node** and an **exclusive gateway node**.
Add two different **task nodes** and link them after the **exclusive** **gateway node**.
Add a new **exclusive gateway** to merge the two flow branches back into one branch.
Add a **new rule** to a node to add a **business decision**:
For [business rules](../../building-blocks/actions/business-rule-action/business-rule-action), you need to check certain values from the process and pick an outgoing node if the condition is met. The gateway node must be connected to the next nodes before configuring the rule.
* select a **scripting language** from the dropdown, for example `MVEL`, and input your condition:
* `input.get("application.client.creditScore") >= 700` ← proceed to node for premium credit card request
* `input.get("application.client.creditScore") < 700` ← proceed to node for standard credit card request
Add a **closing exclusive gateway** to continue the flow.
Add an **end node**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/mapf_gateway_condition.png)
# Moving a token backwards in a process
Back in steps is a functionality that allows you to move back in a business process and redo a series of previous actions.
**Why is it useful?** It brings a whole new level of flexibility to a business flow or journey, allowing the user to go back a step without losing all the data entered so far.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/allow_back_action.png)
In most cases, the **token** instance will just need to advance forward in the process as the actions on the process are completed.
But there might be cases when the token will need to be moved backward in order to redo a series of previous actions in the process.
We will call this behavior **resetting the token**.
The token can only be reset to certain actions on certain process nodes. These actions will be marked accordingly with a flag `Allow BACK on this action?`.
When such an action is triggered from the application, the current process token will be marked as aborted and a new one will be created and placed on the node that contains the action that was executed. If any sub-processes were started between the two token positions, they will also be aborted when the token is reset.
The newly created token will copy from the initial token all the information regarding the actions that were performed before the reset point.
There are a few configuration options available in order to decide which of the data to keep when resetting the token:
* `Remove the following objects from current state`: Process keys that should be deleted when the user navigates back to this action
* `Copy the following objects from current state`: Process keys that should retain their data as persisted prior to the user navigating back to this action (see the sketch below)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/pf_moving_token_bw.gif)
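As a hypothetical illustration of these two options (the actual configuration is done through the action's settings in the Designer, and the process keys below are invented), resetting the token might keep and drop data along these lines:

```json
{
  "removeFromCurrentState": ["application.documents", "application.creditScore"],
  "copyFromCurrentState": ["application.client.firstName", "application.client.email"]
}
```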
# Initiating processes
Entering the realm of FlowX unlocks a spectrum of possibilities for elevating processes and workflows. From automation to data-driven decision-making, several straightforward approaches pave the way for leveraging the platform efficiently. Let's delve into the ways to kickstart a process.
## Kafka event
To trigger a process using a Kafka Send Action:
1. Access FlowX Designer and navigate to the Processes tab, then select Definitions.
2. Choose an existing process definition or create a new one.
3. Integrate a Message Event Send node into your workflow.
4. Attach a Kafka Send action to this node.
5. Define the topic corresponding to the `KAFKA_TOPIC_PROCESS_START_IN` environment variable from your process-engine deployment.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/topic_address.png)
For clarification on the topic, in FlowX.AI Designer visit **Platform status → FlowX Components → process-engine-mngt → kafkaTopicHealthCheckIndicator → details → configuration → topic → process → start\_in**:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/topic_start_process.png)
6. The message body must include the name of the process you intend to start with this action, structured as follows:
```json
{"processName": "your_process_name"}
```
7. Expand the advanced configuration and you will see that a custom header is always set by default to `{"processInstanceId": ${processInstanceId}}`.
8. Also include your JWT key in the headers:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/jwt_start.png)
The headers section should resemble this structure:
```json
{"processInstanceId": ${processInstanceId}, "jwt": "your_jwt"}
```
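Putting the pieces together, the produced record can be sketched as follows. This is an illustration of the message parts rather than a literal wire format: the topic name below is a placeholder for whatever your `KAFKA_TOPIC_PROCESS_START_IN` variable resolves to.

```json
{
  "topic": "your_process_start_in_topic",
  "headers": {
    "processInstanceId": "${processInstanceId}",
    "jwt": "your_jwt"
  },
  "body": {
    "processName": "your_process_name"
  }
}
```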
## Timer start event
To initiate a process using a Start Timer Event:
1. Open FLOWX Designer, head to the Processes tab, then select Definitions.
2. Opt for an existing process definition or create a new one.
3. Incorporate a Start Timer Event and configure it as required, specifying either a specific date or a cycle.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/start_timer_process.png)
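Timer cycles are commonly expressed in ISO 8601 notation. As a hypothetical sketch (the field names below are invented; the actual values are entered in the node's configuration panel), a timer that fires three times, once per day starting on a given date, could be described like this:

```json
{
  "timerType": "cycle",
  "definition": "R3/2024-01-01T00:00:00Z/P1D"
}
```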
Starting a process through registered timers necessitates sending a process start message to Kafka, requiring a service account and authentication. For detailed guidance, refer to:
[**Service Accounts**](../../../setup-guides/access-management/configuring-an-iam-solution#scheduler-service-account)
For deeper insights into the Start Timer Event, refer to the section below:
[Start Timer Event](../../building-blocks/node/timer-events/timer-start-event)
## Message catch start event
To initiate a process using a Message Catch Start Event, two processes are required. One utilizes a throw message event, while the other employs a start catch message event to initiate the process.
### Configuring the parent process
1. Access FlowX Designer, proceed to the Processes tab, then select Definitions.
2. Opt for an existing process definition or create a new one.
3. Configure your process and integrate a Message Throw Intermediate event.
4. Add a task or a user task where process data bound to a correlation key is included (e.g., 'key1').
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/correlation_data.png)
5. Configure the node, considering message correlation.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/message_correlation.png)
Message correlation is vital and achieved through message subscriptions, involving the message name (must be identical for both throw and catch events) and the correlation key (also known as the correlation value).
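To make the pairing concrete, using the example values from this walkthrough ('start\_correlation' and 'key1'), both the throw and the catch event would effectively share a subscription like the conceptual sketch below (not an actual configuration file):

```json
{
  "messageName": "start_correlation",
  "correlationKey": "key1"
}
```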
### Configuring the process with catch event
Now, we will configure the process that will be started with the start catch message event:
1. Follow the initial three steps from the previous section.
2. Integrate a Start Message Catch event node.
3. Configure the node:
* Include the same message name for correlation as added in the throw message event (e.g., 'start\_correlation').
* In the Receive data tab, add the Process Key, which is the correlation key added in the throw event (e.g., 'key1').
Once both processes are configured, commence the parent process. At runtime, you'll notice the initiation of the second process:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/start_with_message_event.gif)
## Task management using Hooks
Initiating processes through hooks involves the creation of a hook alongside two essential processes: one acts as the parent process, while the other is triggered by the hook.
### Creating a hook
Hooks play a crucial role in abstracting stateful logic from a component, facilitating independent testing and reusability.
Users granted task management permissions can utilize hooks to initiate specific process instances, such as triggering notifications upon event occurrences.
Follow the next steps to create a hook:
1. **Create a Hook**: Access FlowX Designer, navigate to the Plugins tab, and choose Task Manager → Hooks.
2. **Configure the Hook**:
* Name: Name of the hook
* Parent process: Process definition name of the parent process
* Type: *Process hook*
* Trigger: *Process Created*
* Triggered Process: Process definition name of the process that we want to trigger
* Activation status
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/hook_created%20copy.png)
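Conceptually, the hook configured above boils down to a record like the following sketch. The field names and example values are illustrative only; the actual setup is done through the Designer form shown above:

```json
{
  "name": "notify_on_creation",
  "parentProcess": "credit_card_application",
  "type": "PROCESS_HOOK",
  "trigger": "PROCESS_CREATED",
  "triggeredProcess": "send_notification",
  "active": true
}
```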
For further details about hooks, refer to the section below:
[Hooks](../../platform-deep-dive/plugins/custom-plugins/task-management/using-hooks)
### Setting up the parent process
1. In FlowX Designer, navigate to the Processes tab and select Definitions.
2. Choose an existing process definition or create a new one.
3. Customize your BPMN process to align with your requirements.
4. Ensure the process is integrated with task management. To do this, within your Process Definition, access Settings → General and activate **"Use process in task management"**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/use_in_task_management.gif)
Establishing appropriate roles and permissions within the parent process (or the service account used) is mandatory to enable it to trigger another process.
Now proceed to configure the process that the hook will trigger.
### Configuring the triggered process
To configure the process triggered by the hook, follow the initial three steps above. Ensure that the necessary roles and permissions are set within the process.
Upon running the parent process, instances will be created for both the parent and the child processes.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/triggered_process_hook.gif)
# FlowX.AI Designer
The FlowX.AI Designer is a collaborative, no-code, web-based application development environment, designed to facilitate the creation of web and mobile applications without the need for coding expertise.
![FlowX.AI Designer](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/designer_new.png)
# Overview
Let's go through the main options available in the **FlowX Designer**:
#### Process Definitions
* create, view, run and edit [processes](../building-blocks/process/process-definition)
* view versioning history
#### Active Process
* view active [process instances](../building-blocks/process/process-instance)
* view the [token](../building-blocks/token) instance and its content
* view [subprocesses](../building-blocks/process/subprocess)
#### Enumerations
* nomenclature containing static value definitions
* used to manage a list of values that can be used as content in UI components or templates
#### Substitution tags
* used to generate dynamic content across the platform
* list of values used for localization
#### Languages
* enumeration values can be defined for a specific language
#### Source systems
* used for multiple source systems, if multiple [**enumeration**](../platform-deep-dive/core-extensions/content-management/enumerations) values are needed to communicate with other systems
[Example here](../platform-deep-dive/core-extensions/content-management/content-management)
#### Media Library
* serves as a centralized hub for managing and organizing various types of media files, including images, GIFs, and more
#### Font files
* Font management allows you to upload and manage multiple font files, which can be later utilized when configuring UI templates using the UI Designer
#### Themes
* The Theme Management feature allows for the personalization of application appearance through easy management and customization of themes.
#### Task Management
* it is a **plugin** suitable for back-office users and supervisors, as it can be used to easily track and assign activities/tasks inside a company
* for more information, check the [Task Management](../platform-deep-dive/plugins/custom-plugins/task-management/) section
#### Notification templates
* send various types of notifications: SMS, push notifications to mobile devices, emails
* forward custom notifications to external outgoing services
* generate and validate [OTP](../platform-deep-dive/plugins/custom-plugins/notifications-plugin/otp-flow/) passwords for user identity verification
* for more information, check the [Notification templates plugin](../platform-deep-dive/plugins/custom-plugins/notifications-plugin/notifications-plugin-overview) section
#### Document templates
* store and make changes to documents
* generate documents based on predefined templates (docx or HTML) and custom process related data
* convert documents between various formats
* split bulk documents into smaller separate documents
* edit documents to add generated barcodes/signatures and pictures
* for more information, check the [Document templates plugin](../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview) section
#### Configuration parameters
* you can add configuration parameters by defining key-value pairs
* they are used for values that might change from one environment to another
* for example, a URL that has different values in a development environment versus a production environment (see the sketch below)
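For instance, a hypothetical parameter holding an environment-specific URL could be sketched as the key-value structure below (illustrative only; the field names are invented and the actual editing happens in the Designer):

```json
{
  "key": "documentServiceUrl",
  "values": {
    "dev": "https://dev.example.com/documents",
    "prod": "https://documents.example.com"
  }
}
```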
#### Access management
* Access Management is used to administrate users, roles and groups
* Access Management accesses Keycloak through an API call, extracting all the necessary details
* it is based on user roles that need to be configured in the identity management solution
#### Integration management
* Integration management helps you configure integrations between the following components: [**FLOWX Process engine**](/4.0/docs/building-blocks/process/process), [**plugins**](/4.0/docs/platform-deep-dive/plugins/custom-plugins), or different adapters
* Integration management enables you to keep track of each integration and its correspondent component and different scenarios used: creating an OTP, document generation, notifications, etc
#### Platform status
* you can check the platform's health by using the **Platform Status** feature
* you can also check the installed versions against the suggested versions for each FlowX component
With the FlowX Designer, you can:
* Develop **processes** based on [BPMN 2.0](../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn)
* Configure interfaces for the processes for both generated and custom screens
* Define **business rules** and validations via [DMN](../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-dmn) files or via [MVEL](../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-mvel), or using other supported scripting languages
* Create integration connectors in a visual manner
* Create **data models** for your applications
* Add new capabilities by using [plugins](../platform-deep-dive/plugins/custom-plugins)
* Manage user access
![FLOWX Designer](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_overview.gif#center)
Depending on your access rights, some tabs might not be visible. For more information, check [Configuring access rights for Admin](../../setup-guides/access-management/configuring-access-rights-for-admin) section.
## Managing process definitions
A **process definition** is uniquely identified by its name and version number.
![Process Definitions](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_process_definitions.gif)
## Viewing active process instances
The complete list of active **process instances** is visible from the FLOWX Designer. They can be filtered by **process definition** names and searched by their unique ID. You can also view the current process instance status and data.
![Active process](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_active_process.png)
## Managing CMS
Using the content management feature, you can perform multiple actions to manipulate and simplify content. You first need to deploy the CMS service in your infrastructure so you can start defining and using the custom content types described in the **Content Management** tab above.
![Content Management](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_cms.gif)
## Managing tasks
The Task Manager **plugin** displays the processes you defined in the Designer in a more business-oriented view. It also offers interactions at the assignment level.
![Task Management](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_task_manager.png)
## Managing notification templates
The notification templates plugin can be viewed, edited, and activated/inactivated from the **FlowX Designer**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_notification_templates.png)
## Managing document templates
One of the main features of the [documents plugin](../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview) is the ability to generate new documents based on custom templates and prefilled with data related to the current process instance.
![Document templates plugin](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_documents.png)
## Managing generic parameters
Through the **FLOWX Designer**, you can edit generic parameters, and import or export them. You can set generic parameters and assign the environment where they should apply.
![Generic Parameters](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_generic_params.png)
The maximum length of an input value is 255 characters.
## Managing users access
Access Management is used to administrate users, roles, and groups directly in FLOWX Designer. It helps you access the identity management solution (Keycloak/[RH-SSO](https://access.redhat.com/products/red-hat-single-sign-on)) through its API, extracting all the necessary details. Access Management is based on user roles that need to be configured in the identity management solution.
![Access Management](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_access_mng.png)
## Managing integrations
Integration management enables you to keep track of each integration and its correspondent component and different scenarios used: creating an OTP, document generation, notifications, etc.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_integrations.png)
## Checking platform status
You can quickly check the health status of all the **FlowX services** and all of your custom connectors.
![Platform Status](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/flowx-designer/designer_platform_status.png)
Check the next section to learn how to create and manage a process from scratch:
# Building with FlowX.AI
Let's explore how to build innovative solutions with FlowX.AI.
[Design a BPMN Process](../flowx-designer/managing-a-process-flow).
Define and manage a process flow using [**FLOWX Process Designer**](../building-blocks/process/process).
Run a process instance with [**FlowX Engine**](../platform-deep-dive/core-components/flowx-engine).
Create the **Front-End Application**.
Connect **Plugins**.
## FlowX.AI implementation methodology
The implementation of FlowX.AI follows a structured approach comprising several phases, using a hybrid methodology that has proven effective in past implementations. These phases include:
* Mobilization
* Analysis & Solution Design
* Project Execution
* Production & Go-live
* Transition to Business as Usual (BaU)
These phases address various aspects of the implementation process, ensuring a comprehensive and successful deployment of FlowX.AI solutions.
Explore our Academy course on Implementation Methodology for in-depth insights:
* What are the project stages in a FlowX implementation?
* What are the key roles of an implementation team?
* What are the main responsibilities of each role in the team?
## Designing the BPMN Process: Requesting a New Credit Card from a Bank App
Let's initiate by designing the BPMN process diagram for a sample use case: requesting a new credit card from a bank app.
## Sample Process Steps
Taking a **business process example** of a credit card application, it involves the following steps:
A user initiates a request for a new credit card - ***Start Event***
The user fills in a form with their personal data - ***User Task***
The bank system performs a credit score check automatically using a send event that communicates with the credit score adapter, followed by a receive event to collect the response from the adapter - ***Automatic Task***
The process bifurcates based on the credit score using an ***Exclusive Gateway***
Each branch entails a service task that saves the appropriate credit card type to the process data - ***Automatic Task***
The branches reconvene through a ***Closing Gateway***
The user views the credit card details and confirms - ***User Task***
After user confirmation, the process divides into two parallel branches - ***Parallel Gateway***. One registers the request in the bank's systems (bank system adapter/integration), and the other sends a confirmation email (notification plugin) to the user
An additional automatic task follows: a call to an external API to compute the distance between the user's address and the bank locations ([Google Maps Distance Matrix API](https://developers.google.com/maps/documentation/distance-matrix/overview)) - ***Automatic Task***
A task is utilized to sort the location distances and present the top three to the user - ***Automatic Task***
The user selects the card pickup point from the bank location suggestions - ***User Task***
A receive task awaits confirmation from the bank that the user has collected the new card, concluding the process flow - ***End Event***
## Sample Process Diagram
Here's what the **BPMN** diagram illustrates:
![Request a new credit card](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/request_a_credit_card_new.png)
# Learn more
Based on what you need to accomplish and understand, find below the suggested tracks you can follow. Choose the track that suits you best.
Take a look at the frameworks and standards used, our architecture, and the latest features we are releasing.
I want to design a process using FlowX.AI.
I want to build an application using FlowX.AI.
Find additional support when you're stuck.
# Introduction
FlowX.AI is an AI multi-experience development platform that sits on top of legacy systems and creates unified, scalable digital experiences.
# Why FlowX.AI?
* It captures and unifies data offering enterprises **AI-based optimization** and innovation capabilities.
* It **integrates easily with any infrastructure** and scales it as necessary.
It is a modern event-driven platform built on a **microservices architecture**. It uses the most popular **industry standards** for process modeling, business rule management and integrates as easily with legacy systems as with the latest APIs and RPAs.
Also, all applications you create are **containerized, portable, scalable, and resilient** out of the box. You’re free to deploy anywhere and scale to any size without redesign.
FlowX.AI can be deployed in a private cloud, in a public cloud or on-prem, depending on your requirements.
## Why does it matter?
FlowX.AI can be **deployed on top of existing legacy systems, so there is no need for costly or risky upgrade projects.** This helps lower the stress on the IT team and the technology budget, since studies show that around 65-75% of the IT budget goes towards maintaining current infrastructure. It's not reasonable to expect enterprises to simply rip and replace the legacy stack with new applications. They will do so at some point, but for now they need something that enables them to run the existing business and gives them some leeway or headspace to create modern digital experiences.
The FlowX.AI platform brings **a layer of scalability to your existing stack**, beyond its current capabilities. This is thanks to our Kafka and Redis core, which queues messages until the system is able to respond. Best of all, the app user does not experience any lag, since data is pre-pulled in the front-end ahead of their actions. A typical use case might sustain 100,000 users per minute, and of course, given our containerized architecture, it can be scaled even more.
**Unified interface across multiple systems or platforms** - often, say for an in-branch onboarding process, a teller has to use 4 or 5 different applications - to access various customer data such as a CRM, public reference checks, a KYC system and so on. With a process designed in FLOWX, you create just one application that unifies the purpose and the data from all those other applications. And this is very liberating for employees, it saves up time, eliminates the possibility of errors and overall, makes the experience of using the onboarding application a pleasant one.
With FlowX you **build omnichannel experiences across all digital channels**, be they web applications, mobile apps or in-branch terminals. What’s more, our applications are built with a hand-off capability - meaning the user can start the process on the web and then pick up on the mobile app later that evening.
**The UI is generated on the fly, by our AI model.** This means that you don’t need coding or design skills to create interfaces. Of course, you can inject your own code for CSS styling, apply your own design system with logo, corporate colors and fonts - but this is just if you want it. By default, you don’t need it.
And of course, when it comes to processes, **we support a no-code/full-code framework that makes the platform available to any citizen developer**. This brings speed to development, since there is no disconnect between business and IT, supports agile ways of working and overall, has a positive impact over creativity and innovation.
## Next steps
We’ll guide you through everything you need to know in order to understand FlowX, deploy it and use it successfully inside your organization.
If you have any questions regarding the content here or anything else that might be missing and you’d like to know, please [get in touch](mailto:support@flowx.ai) with us! We’d be happy to help!
So, to start with, let’s dive into FlowX.AI and see what we can build! 🚀
Read about the frameworks and standards used to build the platform
Find out about the core platform components
See the Release Notes
Build and launch mission-critical software products with FlowX. Learn and share tips and tricks with our community on **Discord**.
# FlowX.AI Engine
The engine is the core of the platform: it is the service that runs instances of the process definitions, generates the UI, and communicates with the frontend as well as with custom integrations and plugins. It keeps track of all currently running process instances and makes sure the process flows run correctly.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Engine%20Diagram%20%281%29.png)
## Orchestration
Creating and interacting with process instances is pretty straightforward, as most of the interaction happens automatically and is handled by the engine.
The only points that need user interaction are starting the process and executing user tasks on it (for example, when a user fills in a form on the screen and saves the results).
# FlowX.AI architecture
Let's delve into the core components that power the **FlowX.AI** platform, providing a comprehensive understanding of its capabilities and functionalities.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/architecture_diagram_22_01_24.png)
## FlowX.AI Designer
The [**FlowX.AI Designer**](../flowx-designer/overview) is a collaborative, no-code, web-based application development environment, designed to facilitate the creation of web and mobile applications without the need for coding expertise. It offers a wide range of capabilities:
* Develop processes based on [BPMN 2.0](./frameworks-and-standards/business-process-industry-standards/intro-to-bpmn) standards.
* Configure user interfaces for processes, both generated and custom.
* Define business rules and validations via [DMN](./frameworks-and-standards/business-process-industry-standards/intro-to-dmn) files, via [MVEL](./frameworks-and-standards/business-process-industry-standards/intro-to-mvel), or via other supported [scripting languages](../building-blocks/supported-scripts).
* Create [integration connectors](../platform-deep-dive/integrations) in a visual manner.
* Design data models for your applications.
* Add new capabilities by using [plugins](../platform-deep-dive/plugins/custom-plugins).
* Manage user access roles effectively.
[**FlowX Designer**](../flowx-designer/overview) is built to administrate everything in FlowX.AI. It is a web application that runs in the browser, meaning that it resides outside of a FlowX deployment.
The platform has **no-code/full-code capabilities**, meaning applications can be developed in a visual way, available for anyone with a powerful business idea. So we’re talking about business analysts, product managers - people without advanced programming skills, and also experienced developers.
The process visual designer works on [BPMN 2.0 standard](../platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn) - meaning that the learning curve for business analysts or product managers is quite fast. Thus, creating new applications (e.g. onboarding an SME client for banks) or adding new functionality (allow personal data changes in an app) takes only 10 days, instead of 6 to 8 months.
Explore more:
## Microservices
FlowX.AI leverages a suite of microservices to drive its functionality:
* [**FlowX.AI Engine**](#flowx-ai-engine)
* [**FlowX.AI SDKs**](#flowx-ai-sdks)
* [**FlowX.AI Content Management**](#flowx-ai-content-management)
* [**FlowX.AI Scheduler**](#flowx-ai-scheduler)
* [**FlowX.AI License Manager**](#flowx-ai-license-manager)
* [**FlowX.AI Admin**](#flowx-ai-admin)
### FlowX.AI Engine
We call it the engine because it's a fitting analogy: once deployed on an existing stack, FlowX.AI becomes the core of your digital operating model.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Engine%20Diagram%20%281%29.png)
You can use FlowX Engine to do the following:
* create any type of external or internal facing application
* redesign business processes from analog, paper-based ones to fully digital and automated processes
* manage integrations, so you can hook it up to existing CRMs, ERPs, KYC, transaction data and many more
* read process definitions (if it is connected to the same database as FlowX Admin)
[FlowX.AI Engine](../platform-deep-dive/core-components/flowx-engine) runs the business processes, coordinating integrations and the omnichannel UI. It is a [Kafka-based](./frameworks-and-standards/event-driven-architecture-frameworks/intro-to-kafka-concepts) event-driven platform, that is able to orchestrate, generate and integrate with any type of legacy system, without expensive or risky upgrades.
This is extremely important because often, digital apps used by a bank’s clients, for example, are limited by the load imposed by the core banking system. And the customers see blocked screens and endlessly spinning flywheels. FlowX.AI buffers this load, offering a 0.2s response time, thus the customer never has to wait for data to load.
### FlowX.AI SDKs
SDKs are used in the [Web (Angular)](../../sdks/angular-renderer), and [Android](../../sdks/android-renderer) applications to render the process screens and orchestrate the [custom components](../building-blocks/ui-designer/ui-component-types/root-components/custom).
Explore more:
### FlowX.AI Content Management
This is another Java microservice that enables you to store and manage content. **The go-to place for all taxonomies.** The extension offers a convenient way of managing various content pieces such as lists or content translations. Anything that is under content management is managed by the [CMS backend service](../../setup-guides/cms-setup). To store content, the service will use a MongoDB database (unstructured database). For example, each time you edit an [enumeration](../platform-deep-dive/core-extensions/content-management/enumerations), the FlowX Designer will send an HTTP request to the microservice.
### FlowX.AI Scheduler
If you need to **set a timer on** a process that needs to end after X days, you can use the FlowX Scheduler microservice. It is a service that is able to receive requests (like a reminder application) to remind you in X amount of time to do something.
When you start a process, the process must have an expiry date.
The Scheduler microservice communicates with the FlowX Engine through the Kafka event queue: the engine creates a new message (writing some data) and sends it to Kafka with the scheduler's address; when the reminder time comes up, the scheduler puts a new message back on the Kafka layer with the engine's destination (the time plus the ID of the process).
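A hypothetical reminder message placed back on the Kafka layer would therefore carry little more than the time and the process identifier, along these lines (illustrative only; the field names are invented):

```json
{
  "processInstanceId": 12345,
  "scheduledAt": "2024-06-01T09:00:00Z"
}
```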
### FlowX.AI License Manager
Used for displaying usage reports related to the FlowX.AI platform within the FlowX Designer, ensuring transparent monitoring and management of the platform.
### FlowX.AI Admin
Used for storing and editing process definitions, FlowX Admin connects to the same database as the FlowX Engine, ensuring consistency in data management.
## FlowX.AI custom plugins
Plugins are bits of functionality that allow you to expand the capabilities of the platform - for example, we have the following custom plugins:
* [FlowX.AI Notifications](../platform-deep-dive/plugins/custom-plugins/notifications-plugin/notifications-plugin-overview) plugin
* [FlowX.AI Documents](../platform-deep-dive/plugins/custom-plugins/documents-plugin/documents-plugin-overview) plugin
* [FlowX.AI OCR](../platform-deep-dive/plugins/custom-plugins/ocr-plugin) plugin
* [FlowX.AI Task Management](../platform-deep-dive/plugins/custom-plugins/task-management/task-management-overview) plugin
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/plugins_architecture.png)
## Authorization & Session Manager
We recommend Keycloak, a component that allows you to create users and store credentials. It can also be used for authorization - defining groups and assigning roles to users.
Every communication that comes from a consumer application goes through a public entry point (API Gateway). When the consumer application tries to start a process, the public entry point checks for authentication (Keycloak issues a token) and then validates it.
## Integrations
Connecting your legacy systems or third-party apps to the FlowX Engine is easily done through [custom integrations](../platform-deep-dive/integrations/integrations-overview). These can be developed using your preferred tech stack, the only requirement is that they connect to Kafka. These could include legacy APIs, custom file exchange solutions, or RPA.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/integrations_architecture.png)
In summary, FlowX.AI offers a robust and versatile architecture that empowers users to create, manage, and integrate applications seamlessly, without the need for extensive coding expertise. Its microservices, SDKs, and plugins work in harmony to drive efficiency and innovation in application development and business process management.
# Intro to BPMN
The core element of the platform is a process. Think of it as a representation of your business use case, for example making a request for a new credit card, placing an online food order, registering your new car or creating an online fundraiser supporting your cause.
To easily design and model process flows, we use the standard **BPMN 2.0** graphical representation.
## What is Business Process Model and Notation (BPMN)?
Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model.
It is **the most widely used standard for business process diagrams**. It is intended to be used directly by the stakeholders who design, manage and realize business processes, but at the same time be precise enough to allow BPMN diagrams to be translated into software process components.
This is why we chose it for modeling the process flows.
## BPMN 2.0 elements
A BPMN business process flow is represented as a set of process elements connected by sequences. Here are the most common types of elements:
### Events
Events describe something that happens during the course of a process. There are three main event types: start events, intermediate events, and end events. These three types are also defined as either catching events (they react to a trigger) or throwing events (they are triggered by the process).
![basic event types](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn/events.png)
### Activities
An activity represents a unit of work to be performed by the business process. An activity can be atomic (a task) or can represent a group of more activities (a subprocess).
![various types of activities](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn/activities.png)
### Gateways
Gateways are used to control how a process flows. They act as decision points that pick which sequence flow the [**process instance**](../../../building-blocks/process/process-instance) should take, based on the evaluation of the specified condition(s) (in the case of exclusive gateways), or they can be used to split a process into more branches (in the case of parallel gateways).
![exclusive and parallel gateways](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn/gateways.png)
### Pools and lanes
Pools and lanes are used to group the process steps by process participant. To show that certain user roles are responsible for performing specific process steps, you can divide the process using lanes.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn/swimlanes_pool.png)
## BPMN basic concepts
Let's get into a bit more details on the main types of BPMN process elements.
### Events
Events are signals that something happens within a process, including its start and end and any interactions with the process environment.
Types of Events:
* Start Events
* End Events
* Intermediate Events
### Start and End events
**Start & End events**
| Start Event Icon | End Event Icon |
| :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn/bpmn-basic-concepts/event_start.png) | ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn/bpmn-basic-concepts/event_end.png) |
| Event that triggers the process | Event that defines the state that terminates the process |
### Intermediate events
An intermediate event occurs between a start and an end event. It is represented by a circle with a double line, indicating its ability to both catch and throw information.
#### Message events
Message events serve as a means to incorporate messaging capabilities into business process modeling. These events are specifically designed to capture the interaction between different process participants by referencing messages.
### Activities
#### Task
An atomic activity within a process flow, created when the activity cannot be broken down further. A task belongs to one lane.
| User task | Service task |
| :-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn/bpmn-basic-concepts/user_task.png#center) | ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/intro-to-bpmn/bpmn-basic-concepts/service_task.png#center) |
| A task that requires human action | A task that uses a web service, automated application, or other kinds of service in completing the task |
#### User Task
A task performed by a human user rather than by the business process execution engine; the process waits until the user completes the required action in the application.
#### Service Task
Executed by a business process engine. The task defines a script that the FlowX Engine can interpret and execute, completing when the script finishes. It can also run a [**business rule**](../../../building-blocks/actions/business-rule-action/business-rule-action) on the process data.
### BPMN Subprocesses
In BPMN, a subprocess is a compound activity that represents a collection of other tasks and subprocesses. BPMN diagrams are generally created to communicate processes with others, so it is best not to make a single diagram too complex. By using subprocesses, you can split a complex process into multiple levels, which allows you to focus on a particular area in a single process diagram.
### Gateways
Gateways control the **process flow**: they can merge it or split it.
#### Exclusive gateways
In business processes, you typically need to make **business decisions**, and the most common type of decision is choosing **either/or**. Exclusive gateways limit the possible outcome of a decision to a single path, and the evaluated conditions determine which one to follow.
#### Parallel gateways
In many cases, you want to split up the flow within your business process. For example, the sales and risk departments may examine a new mortgage application at the same time, reducing the total cycle time for a case. To express parallel flow in BPMN, you use a **parallel gateway**.
| Exclusive gateway (XOR) | Parallel gateway (AND) |
| :---------------------------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------: |
| ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/gateway_exclusive.png#center) | ![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/gateway_parallel.png#center) |
| Defines a decision point | No decision making; all outgoing branches are activated |
**Closing gateway**
* Closes a gateway by connecting its incoming branches, with no decision logic involved
* The symbol used is determined by the initial gateway type.
* Parallel gateways:
* These gateways wait for all input tokens and merge them into a single token.
* Are aware of all preceding token flows, know the paths selected, and expect tokens from these paths.
## In depth docs
For comprehensive insights into BPMN and its various node types, explore our course at FlowX Academy:
* What's BPMN (Business Process Model Notation) and how does it work?
* How is BPMN used in FlowX?
# Intro to DMN
As we've seen in the previous chapter, Business Process Model and Notation (BPMN) is used to define business processes as a sequence of activities. If we need to branch off different process paths, we use gateways. These have rules attached to them in order to decide which outgoing path the process should continue on.
![Process with gateways](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/process_with_gateways.png)
For more information on how to define DMN gateway decisions, check the [**Exclusive gateway node**](../../../building-blocks/node/exclusive-gateway-node) section.
We needed a convenient way of specifying **business rules**, so we support two ways of writing them:
* defining them as DMN decisions
You can define a DMN business rule action directly in **FlowX Designer**. For more information, check the [**DMN business rule action**](../../../building-blocks/actions/business-rule-action/dmn-business-rule-action) section.
* adding [MVEL](./intro-to-mvel) scripts
### What is Decision Model and Notation (DMN)?
**Decision Model and Notation** (or DMN) is a graphical language that is used to specify business decisions. DMN acts as a translator, converting the code behind complex decision-making into easily readable diagrams.
Most process models are created using **Business Process Model and Notation (BPMN)**. The DMN standard was developed to complement BPMN by providing a mechanism for modeling the decision-making represented by a task within a process model. DMN does not have to be used in conjunction with BPMN, but the two are highly compatible.
FLOWX.AI supports [DMN 1.3](https://www.omg.org/spec/DMN/1.3/) version.
### DMN Elements
There are 4 basic elements of the **Decision Model and Notation**:
* [Decision](#decision)
* [Business Knowledge Model](#business-knowledge-model)
* [Input Data](#input-data)
* [Knowledge Source](#knowledge-source)
![Basic DMN Diagram](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-overview/frameworks-and-standards/business-process-industry-standards/dmn_diagram.png)
#### Decision
It’s the center point of a DMN diagram, symbolizing the action that produces the result of a decision as its output.
**Decision service**
A decision service is a high-level decision with well-defined inputs that is made available as a service for invocation. An external application or a business process (BPMN) can call the decision service.
#### Business Knowledge Model
It portrays specific knowledge within the business and stores the origin of the information. Decisions that share the same logic but depend on different sub-input data or sub-decisions use business knowledge models to determine which procedure to follow.
**Example:** a decision, rule, or standard table.
#### Input Data
This is the information used as input to the decision. It’s the variable that determines the result. Input data usually includes business-level concepts or objects relevant to the business.
**Example:** Entering a customer’s tax number and the amount requested in a credit assessment decision.
#### Knowledge Source
It’s a source of knowledge that conveys a kind of legitimacy to the business.
**Example**: policy, legislation, rules.
### DMN Decision Table
A decision table represents decision logic which can be depicted as a table in Decision Model and Notation. It consists of inputs, outputs, rules, and hit policy.
| Decision table elements | |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Inputs | A decision table can have one or more input clauses that represent the attributes on which the rule should be applied. |
| Outputs | Each entry with values for the input clause needs to be associated with output clauses. The output represents the result that we set if the rules applied to the input are met. |
| Rules | Each rule contains input and output entries. The input entries are the condition and the output entries are the conclusion of the rule. If each input entry (condition) is satisfied, then the rule is satisfied and the decision result contains the output entries (conclusion) of this rule. |
| Hit policy | The hit policy specifies what the result of the decision table is in cases of overlapping rules, for example, when more than one rule matches the input data. |
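For illustration, here is a minimal decision table for the credit-score decision used later in the MVEL chapter (column names and values are hypothetical), with a `Unique` hit policy so that at most one rule can match:

| # | credit\_score (input) | card\_type (output) |
| - | --------------------- | ------------------- |
| 1 | >= 700 | PREMIUM |
| 2 | < 700 | STANDARD |

A rule reads: if the input entry (condition) matches, the decision result contains the output entry (conclusion).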
**Hit Policy examples**
| Hit policy | Result | Description |
| ---------- | ---------------- | ----------- |
| Unique | Single result | Only one rule will match, or no rule at all. |
| First | Single result | Rule order matters; evaluation continues with the first rule that matches. |
| Priority | Single result | Rule outputs are prioritized; rules may overlap, but only the match with the highest output priority counts. |
| Any | Single result | Multiple rules can be satisfied, but all satisfied rules must generate the same output; otherwise, the hit policy is violated. |
| Rule order | Multiple results | The rules are evaluated in the order they are defined; the satisfied rules can generate different outputs. |
| Collect | Multiple results | The rules are evaluated in an arbitrary order; the satisfied rules can generate different outputs. Can contain aggregators (SUM, MIN, MAX, COUNT) that apply an aggregation operation on all the outputs resulting from the rule evaluation. |
### DMN Model
DMN defines an XML schema that allows DMN models to be used across multiple DMN authoring platforms.
You can use such an XML model with FlowX Designer by adding it to a business rule action. You can also start from an MVEL script and switch to DMN later if you need to generate a graphical representation of the model.
### Using DMN with FLOWX Designer
As mentioned previously, DMN can be used with FLOWX Designer for the following scenarios:
* For defining gateway decisions, using [exclusive gateways](../../../building-blocks/node/exclusive-gateway-node)
* For defining [business rules actions](../../../building-blocks/actions/business-rule-action/business-rule-action) attached to a [task node](../../../building-blocks/node/task-node)
### In depth docs
# Intro to MVEL
We can also specify business rule logic using MVEL scripts. As opposed to DMN, MVEL lets you create complex business rules with multiple parameters and sub-calculations.
## What is MVEL?
**MVFLEX Expression Language** (MVEL) is an expression language with a syntax similar to the Java programming language. This makes it relatively easy to use for defining more complex business rules that cannot be expressed using DMN.
The runtime allows MVEL expressions to be executed either interpretively, or through a pre-compilation process with support for runtime byte-code generation to remove overhead. We pre-compile most of the MVEL code in order to make sure the process flow advances as fast as possible.
## Example
```java
// route the token to a different node depending on the user's credit score
if (input.get("user.credit_score") >= 700) {
    // high score: continue with the premium credit card flow
    output.setNextNodeName("TASK_SET_CREDIT_CARD_TYPE_PREMIUM");
} else {
    // otherwise, continue with the standard credit card flow
    output.setNextNodeName("TASK_SET_CREDIT_CARD_TYPE_STANDARD");
}
```
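Because MVEL rules are plain scripts, they can combine several inputs and intermediate calculations before routing the token. A sketch building on the example above (the input keys, the 60-month term, and the node names are all hypothetical):

```java
// read several process variables (hypothetical keys)
income = input.get("applicant.monthly_income");
debt   = input.get("applicant.monthly_debt");
amount = input.get("loan.requested_amount");

// sub-calculation: debt-to-income ratio including the new installment
installment = amount / 60;                 // assumed 60-month term
dti = (debt + installment) / income;

// route the token based on the computed ratio
if (dti < 0.4) {
    output.setNextNodeName("TASK_PRE_APPROVE");
} else {
    output.setNextNodeName("TASK_MANUAL_REVIEW");
}
```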
## In depth docs
# Intro to Elasticsearch
Although Elasticsearch itself is not inherently event-driven, it can be integrated into event-driven architectures or workflows. External components or frameworks detect and trigger events, and Elasticsearch is used to efficiently index the event data and make it searchable.
This integration allows event-driven systems to leverage Elasticsearch’s powerful search and analytics capabilities for real-time processing and retrieval of event data.
## What is Elasticsearch?
Elasticsearch is a powerful and highly scalable open-source search and analytics engine built on top of the [Apache Lucene](https://lucene.apache.org/) library. It is designed to handle a wide range of data types and is particularly well-suited for real-time search and data analysis use cases. Elasticsearch provides a distributed, document-oriented architecture, making it capable of handling large volumes of structured, semi-structured, and unstructured data.
## How does it work?
At its core, Elasticsearch operates as a distributed search engine, allowing you to store, search, and retrieve large amounts of data in near real-time. It uses a schema-less JSON-based document model, where data is organized into indices, which can be thought of as databases. Within an index, documents are stored, indexed, and made searchable based on their fields. Elasticsearch also provides powerful querying capabilities, allowing you to perform complex searches, filter data, and aggregate results.
## Why is it useful?
One of the key features of Elasticsearch is its distributed nature. It supports automatic data sharding, replication, and node clustering, which enables it to handle massive amounts of data across multiple servers or nodes. This distributed architecture provides high availability and fault tolerance, ensuring that data remains accessible even in the event of hardware failures or network issues.
Elasticsearch integrates with various programming languages and frameworks through its comprehensive RESTful API. It also provides official clients for popular languages like Java, Python, and JavaScript, making it easy to interact with the search engine in your preferred development environment.
## Indexing & sharding
### Indexing
Indexing refers to the process of adding, updating, or deleting documents in Elasticsearch. It involves taking data, typically in JSON format, and transforming it into indexed documents within an index. Each document represents a data record and contains fields with corresponding values. Elasticsearch uses an inverted index data structure to efficiently map terms or keywords to the documents containing those terms. This enables fast full-text search capabilities and retrieval of relevant documents.
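To make the inverted-index idea concrete, here is a toy sketch in Java (not Elasticsearch code, just the underlying data structure): each term maps to the set of document ids that contain it, so a term lookup immediately yields the matching documents instead of scanning every document.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class InvertedIndexSketch {
    public static void main(String[] args) {
        List<String> docs = List.of(
                "premium credit card application",
                "standard credit card",
                "credit score check");

        // inverted index: term -> ids of the documents containing that term
        Map<String, Set<Integer>> index = new HashMap<>();
        for (int id = 0; id < docs.size(); id++) {
            for (String term : docs.get(id).split("\\s+")) {
                index.computeIfAbsent(term, t -> new TreeSet<>()).add(id);
            }
        }

        // full-text lookup is now a single map access instead of a scan
        System.out.println(index.get("credit")); // [0, 1, 2]
        System.out.println(index.get("score"));  // [2]
    }
}
```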
### Sharding
Sharding, on the other hand, is the practice of dividing index data into multiple smaller subsets called shards. Each shard is an independent, self-contained index that holds a portion of the data. By distributing data across multiple shards, Elasticsearch achieves horizontal scalability and improved performance. Sharding allows Elasticsearch to handle large amounts of data by parallelizing search and indexing operations across multiple nodes or servers.
Shards can be configured as primary or replica shards. Primary shards contain the original data, while replica shards are exact copies of the primary shards, providing redundancy and high availability. By having multiple replicas, Elasticsearch ensures data durability and fault tolerance. Replicas also enable parallel search operations, increasing search throughput.
Sharding offers several advantages. It allows data to be distributed across multiple nodes, enabling parallel processing and faster search operations. It also provides fault tolerance, as data is replicated across multiple shards. Additionally, sharding allows Elasticsearch to scale horizontally by adding more nodes and distributing the data across them.
The number of shards and their allocation can be determined during index creation or modified later. It is important to consider factors such as the size of the dataset, hardware resources, and search performance requirements when deciding on the number of shards.
For more details, check Elasticsearch documentation:
## Leveraging Elasticsearch for advanced indexing with FlowX.AI
The integration between FlowX.AI and Elasticsearch involves indexing specific keys or data, from the [**UI Designer**](../../../building-blocks/ui-designer/) to [process definitions](../../../building-blocks/process/process-definition). The indexing process is initiated by the [FlowX Engine](../../../platform-deep-dive/core-components/flowx-engine), which sends the data to Elasticsearch to be indexed in the "process\_instance" index.
There are two methods available for indexing data: Kafka and HTTP.
* **Kafka**: Data is sent to a Kafka topic to be indexed. Deploy the [**Kafka Connect with Elasticsearch Sink Connector**](../../../../setup-guides/flowx-engine-setup-guide/configuring-elasticsearch-indexing/elasticsearch-indexing.mdx#kafka-connect) in your infrastructure to use this method.
* **HTTP**: Data is indexed through a direct HTTP connection from the FlowX Engine to Elasticsearch.
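Conceptually, the HTTP route boils down to standard Elasticsearch document-indexing calls. A minimal sketch using Java's built-in HTTP client (the host, document id, and payload are hypothetical; in practice the engine builds these requests for you):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class IndexProcessInstance {
    public static void main(String[] args) throws Exception {
        // hypothetical process instance data, serialized as JSON
        String doc = "{\"processDefinitionName\":\"loan_application\",\"state\":\"STARTED\"}";

        HttpRequest request = HttpRequest.newBuilder()
                // index a document into the "process_instance" index under a given id
                .uri(URI.create("http://localhost:9200/process_instance/_doc/1f2e3d4c"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(doc))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```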
To ensure effective indexing of process instance details, a crucial step is defining a mapping that specifies how Elasticsearch should index the received messages, which is essential because process instance details often have specific formats. The process-engine takes care of this by automatically creating an index template during startup if one doesn't already exist. The index template acts as a blueprint, giving Elasticsearch the instructions it needs to index and organize the incoming data accurately. By establishing and maintaining an appropriate index template, the integration between FlowX.AI and Elasticsearch can seamlessly index and retrieve process instance information in a structured manner.
### Kafka transport strategy
With the [Kafka](../event-driven-architecture-frameworks/intro-to-kafka-concepts) transport strategy, the process-engine sends messages to a Kafka topic whenever there is data from a process instance to be indexed. Kafka Connect is then configured to read these messages from the topic and forward them to Elasticsearch for indexing.
This approach offers benefits such as fire-and-forget communication, where the process-engine no longer needs to spend time handling indexing requests. By decoupling the process-engine from the indexing process and leveraging Kafka as a messaging system, the overall system becomes more efficient and scalable. The process-engine can focus on its core responsibilities, while Kafka Connect takes care of transferring the messages to Elasticsearch for indexing.
To optimize indexing response time, Elasticsearch utilizes multiple indices created dynamically by the Kafka Connect connector. The creation of indices is based on the timestamp of the messages received in the Kafka topic. The frequency of index creation, such as per minute, hour, week, or month, is determined by the timestamp format configuration of the Kafka connector.
It's important to note that the timestamp used for indexing is the process instance's start date. This means that subsequent updates received for the same object will be directed to the original index for that process instance. To ensure proper identification and indexing, it is crucial that the timestamp of the message in the Kafka topic corresponds to the process instance's start date, while the key of the message aligns with the process instance's UUID. These two elements serve as unique identifiers for determining the index in which a process instance object was originally indexed.
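In producer terms, that contract could look like the following sketch (the topic name and values are hypothetical): the record key carries the instance UUID and the record timestamp carries the instance start date, so Kafka Connect can route subsequent updates to the original index.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class IndexingProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String instanceUuid = "0b6d52e8-0000-0000-0000-000000000000"; // -> message key
            long startDate = 1717000000000L; // instance start date (epoch millis) -> record timestamp

            producer.send(new ProducerRecord<>(
                    "process-instance-index",   // hypothetical topic name
                    null,                       // let the partitioner choose the partition
                    startDate,
                    instanceUuid,
                    "{\"state\":\"STARTED\"}"));
        }
    }
}
```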
For more details on how to configure process instance indexing through the Kafka transport, check the following section:
# Intro to Kafka concepts
Apache Kafka is an open-source distributed event streaming platform that can handle a high volume of data and enables you to pass messages from one end-point to another.
Kafka is a unified platform for handling real-time data feeds. It supports low-latency message delivery, guarantees fault tolerance in the presence of machine failures, and can handle many diverse consumers. Kafka is very fast, sustaining millions of writes per second. Because Kafka persists all data to disk, writes go to the page cache of the OS (RAM) first, which makes transferring data from the page cache to a network socket very efficient.
### Benefits of using Kafka
* **Reliability** − Kafka is distributed, partitioned, replicated, and fault-tolerant
* **Scalability** − Kafka messaging system scales easily without downtime
* **Durability** − Kafka uses a distributed commit log, which means messages are persisted on disk as quickly as possible
* **Performance** − Kafka has high throughput for both publishing and subscribing messages, and it maintains stable performance even when many terabytes of messages are stored.
## Key Kafka concepts
### Events
Kafka encourages you to see the world as sequences of events, which it models as key-value pairs. The key and the value have some kind of structure, usually represented in your language’s type system, but fundamentally they can be anything. Events are immutable, as it is (sometimes tragically) impossible to change the past.
### Topics
Because the world is filled with so many events, Kafka gives us a means to organize them and keep them in order: topics. A topic is an ordered log of events. When an external system writes an event to Kafka, it is appended to the end of a topic.
In FLOWX.AI, Kafka handles all communication between the [FlowX Engine](../../../platform-deep-dive/core-components/flowx-engine) and external plugins and integrations. It is also used for notifying running process instances when certain events occur. You can find more information about Kafka configuration in the section below:
### Producer
A producer is an external application that writes messages to a Kafka cluster, communicating with the cluster using Kafka’s network protocol.
### Consumer
The consumer is an external application that reads messages from Kafka topics and does some work with them, like filtering, aggregating, or enriching them with other information sources.
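A minimal consumer sketch with the standard Kafka Java client (the broker address, topic, and group id are placeholders):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ExampleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group"); // consumers in one group share the work
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic"));
            while (true) {
                // poll returns the next batch of records appended to the topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```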
## In depth docs
# Intro to Kubernetes
Kubernetes is an open-source container orchestration platform that automates many of the manual processes involved in containerized application deployment, management, and scaling.
The purpose of Kubernetes is to orchestrate containerized applications to run on a cluster of hosts. **Containerization** enables you to deploy multiple applications using the same operating system on a single virtual machine or server.
Kubernetes, as an open platform, enables you to build applications using your preferred programming language, operating system, libraries, or messaging bus. To schedule and deploy releases, existing continuous integration and continuous delivery (CI/CD) tools can be integrated with Kubernetes.
## Benefits of using Kubernetes
* A proper way of managing containers
* High availability
* Scalability
* Disaster recovery
## Key Kubernetes Concepts
### Node & PODs
A Kubernetes node is a machine that runs containerized workloads as part of a Kubernetes cluster. A node can be a physical machine or a virtual machine, and can be hosted on-premises or in the cloud.
A pod is composed of one or more containers that are colocated on the same host and share a network stack as well as other resources, such as volumes. Pods are the foundation upon which Kubernetes applications are built: Kubernetes uses pods to run instances of your application, with each pod representing a single instance.
Pods are typically ephemeral, disposable resources. Individually scheduled pods miss some of the high availability and redundancy Kubernetes features. Instead, pods are deployed and managed by Kubernetes *Controllers*, such as the Deployment Controller.
### Service & Ingress
**Service** is an abstraction that defines a logical set of pods and a policy for accessing them. In Kubernetes, a Service is a REST object, similar to a pod. A Service definition, like all REST objects, can be POSTed to the API server to create a new instance. A Service object's name must be a valid [RFC 1035](https://www.ietf.org/rfc/rfc1035.txt) label name.
**Ingress** is a Kubernetes object that allows access to the Kubernetes services from outside of the Kubernetes cluster. You configure access by writing a set of rules that specify which inbound connections are allowed to reach which services. This allows combining all routing rules into a single resource.
**Ingress controllers** are pods, just like any other application, so they’re part of the cluster and can see and communicate with other pods. An Ingress can be configured to provide Services with externally accessible URLs, load balance traffic, terminate SSL / TLS, and provide name-based virtual hosting. An Ingress controller is in charge of fulfilling the Ingress, typically with a load balancer, but it may also configure your edge router or additional frontends to assist with the traffic.
FlowX.AI offers a predefined NGINX setup as Ingress Controller. The [NGINX Ingress Controller](https://www.nginx.com/products/nginx-ingress-controller/) works with the [NGINX](https://www.nginx.com/resources/glossary/nginx/) web server (as a proxy). For more information, check the below sections:
### ConfigMap & Secret
**ConfigMap** is an API object that makes it possible to store configuration for use by other objects. Unlike most Kubernetes objects that have a `spec`, a ConfigMap has `data` and `binaryData` fields, both of which are optional and accept key-value pairs. The `data` field is intended to hold UTF-8 strings, whereas the `binaryData` field is intended to hold binary data as base64-encoded strings.
The name of a ConfigMap must be a valid [DNS subdomain name](https://www.ietf.org/rfc/rfc1035.txt).
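As a sketch, a ConfigMap manifest using both fields (the names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  LOG_LEVEL: "info"              # plain UTF-8 string values
  application.properties: |
    server.port=8080
binaryData:
  logo.png: iVBORw0KGgo=         # binary payload, base64-encoded
```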
**Secret** represents an amount of sensitive data, such as a password, token, or key. Alternatively, such information could be included in a pod specification or a container image. Secrets are similar to ConfigMaps but they are designed to keep confidential data.
### Volumes
A Kubernetes volume is a directory in the orchestration and scheduling platform that contains data accessible to containers in a specific pod. Volumes serve as a plug-in mechanism for connecting ephemeral containers to persistent data stores located elsewhere.
### Deployment
A deployment is a collection of identical pods that are managed by the Kubernetes Deployment Controller. A deployment specifies the number of pod replicas that will be created. If pods or nodes encounter problems, the Kubernetes Scheduler ensures that additional pods are scheduled on healthy nodes.
Typically, deployments are created and managed using `kubectl create` or `kubectl apply`: you create a deployment by defining a manifest file in YAML format and applying it, as sketched below.
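A minimal Deployment manifest might look like this (the name and container image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                 # desired number of identical pods
  selector:
    matchLabels:
      app: example-app
  template:                   # pod template used to create the replicas
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.25   # placeholder container image
          ports:
            - containerPort: 80
```

Applying it with `kubectl apply -f deployment.yaml` asks the Deployment Controller to create and maintain three identical pods.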
## Kubernetes Architecture
Kubernetes architecture consists of the following main parts:
* Control Plane (master)
* kube-apiserver
* etcd
* kube-scheduler
* kube-controller-manager
* cloud-controller-manager
* Node components
* kubelet
* kube-proxy
* Container runtime
## Install tools
### kubectl
The `kubectl` command-line tool lets you run commands against Kubernetes clusters. You can use `kubectl` to deploy applications, inspect and manage cluster resources, and view logs. See the `kubectl` [reference documentation](https://kubernetes.io/docs/reference/kubectl/) for more information.
### kind
The `kind` command makes it possible to run Kubernetes on a local machine; it runs local Kubernetes clusters using Docker containers as “nodes”. As a prerequisite, Docker needs to be installed and configured.
## In depth docs
# Intro to NGINX
NGINX is a free, open-source, high-performance web server with a rich feature set, simple configuration, and low resource consumption that can also function as a reverse proxy, load balancer, mail proxy, HTTP cache, and many other things.
### How does NGINX work?
NGINX allows you to hide a server application's complexity from a front-end application. Rather than creating a new process for each web request, it uses an event-driven, asynchronous approach in which requests are handled in a single thread.
### Using NGINX with FlowX Designer
[**The NGINX Ingress Controller for Kubernetes**](https://kubernetes.github.io/ingress-nginx/) - `ingress-nginx` is an ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
Ingress allows you to route requests to services based on the host or path of the request, centralizing a number of services into a single entry point.
The [ingress resource](https://www.nginx.com/products/nginx-ingress-controller/nginx-ingress-resources/) simplifies the configuration of **SSL/TLS** **termination**, **HTTP load-balancing**, and **layer routing**.
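As a sketch of such routing rules (the host, paths, and service names are hypothetical), an Ingress served by the NGINX Ingress Controller could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flowx-ingress
spec:
  ingressClassName: nginx          # fulfilled by the NGINX Ingress Controller
  rules:
    - host: designer.example.com   # hypothetical host
      http:
        paths:
          - path: /engine          # route engine calls to the engine service
            pathType: Prefix
            backend:
              service:
                name: process-engine   # hypothetical service name
                port:
                  number: 80
          - path: /                # everything else goes to the SPA
            pathType: Prefix
            backend:
              service:
                name: designer-spa     # hypothetical service name
                port:
                  number: 80
```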
For more information, check the following section:
#### Integrating with FlowX Designer
FlowX Designer uses the NGINX Ingress Controller for the following actions:
1. For routing calls to plugins
2. For routing calls to the [FlowX Engine](../../../platform-deep-dive/core-components/flowx-engine):
* Viewing current instances of processes running in the FlowX Engine
* Testing process definitions from the FlowX Designer - routing API calls and SSE communications to the FlowX Engine backend
* Accessing REST API of the backend microservice
3. For configuring the Single Page Application (SPA) - FlowX Designer SPA will use the backend service to manage the platform via REST calls
In the following section, you can find a suggested NGINX setup, the one used by FlowX.AI:
### Installing NGINX Open Source
For more information on how to install NGINX Open Source, check the following guide:
# Intro to Redis
Redis is a fast, open-source, in-memory key-value data store that is commonly used as a cache to store frequently accessed data in memory so that applications can be responsive to users. It delivers sub-millisecond response times enabling millions of requests per second for applications.
It is also used as a Pub/Sub messaging solution, allowing messages to be passed to channels and for all subscribers to that channel to receive that message. This feature enables information to flow quickly through the platform without using up space in the database as messages are not stored.
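A minimal pub/sub sketch using the Jedis Java client (the host, channel, and message are placeholders; the platform's own Redis usage is configured for you):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class PubSubSketch {
    public static void main(String[] args) throws InterruptedException {
        // subscriber: blocks on its own thread, receiving every message on the channel
        Thread subscriber = new Thread(() -> {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        System.out.println(channel + " -> " + message);
                    }
                }, "platform-events");
            }
        });
        subscriber.start();
        Thread.sleep(500); // give the subscriber time to attach

        // publisher: the message is delivered to current subscribers and never stored
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.publish("platform-events", "cache invalidated");
        }
        // the subscriber keeps listening; stop the program manually
    }
}
```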
Redis offers a primary-replica architecture, either as a single-node primary or in a clustered topology, which allows you to build highly available solutions with consistent performance and reliability. The cluster size can easily be scaled up or down, letting it adjust to any demand.
## In depth docs
# Applications
Applications group all the resources and dependencies needed to implement a use case.
![FlowX.AI Designer](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/designer_new.png)
## Overview
An application groups resources that represent a project's entire lifecycle. It's not just a collection of processes; it's an organized workspace containing all dependencies required for that project, from themes and templates to integrations and other resources. Applications enable you to:
* Create Versions of an application to manage changes over time.
* Deploy Builds for consistent environments.
* Organize Resources to ensure clarity and reduce errors.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/application_processes.png)
## Core features
Applications provide a single view of all resources referenced in a process, enabling you to manage everything from one central workspace. This approach reduces context-switching and keeps configuration focused.
Applications support the definition of multiple locales, allowing for easy handling of regional content and settings. Enumerations, substitution tags, and media can be localized to meet specific environment requirements.
Applications support a robust versioning mechanism that tracks changes to processes, resources, and configurations. The Application Version captures a snapshot of all included resources, while the Build consolidates everything into a deployable package. Each build contains only one version of an application, ensuring a consistent deployment.
Applications can leverage Libraries, which act similarly to applications but are designed for resource sharing across projects. A library can contain reusable components like processes or templates that can be included in other applications. Dependencies between applications and libraries are managed carefully to ensure compatibility.
## Config
Config mode is the environment where you set up, adjust, and manage your application's resources, processes, and configurations. It's the workspace where you fine-tune every aspect of the application before it's ready for deployment. Think of it as the design phase, where the focus is on setup, organization, and preparation.
## Application components
### Processes
* **Processes**: Process definitions that drive the application's core functionality and share an application's resources.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/process_definition.png)
* **Subprocesses**: Processes that can be invoked within the main process. These can either be part of the same application or imported from a library set as a dependency for the application.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/application_subprocess.gif)
### Content Management System (CMS)
* **Enumerations**: Predefined sets of options or categories used across the application. These are useful for dropdown menus, filters, and other selection fields.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/enums_app.png)
* **Substitution tags**: System-predefined substitution tags and dynamic placeholders that allow for personalized content, such as user-specific data in notifications or documents.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/react/subst_tags.png)
* **Media Library**: A collection of images, videos, documents, and other media assets accessible within the application.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/react/media_library_app.png)
### Task Management
* **Views**: Views are configurable interfaces that present task-related data based on specific business process definitions. They allow users to create tailored visualizations of task information, utilizing filters, sorting options, and custom parameters to focus on relevant data.
* **Hooks**: Custom scripts or actions triggered at specific points in a task's lifecycle.
* **Stages**: The phases that a task goes through within a workflow (e.g., Pending, In Progress, Completed).
* **Allocation Rules**: Criteria for assigning tasks within workflows based on predefined rules (e.g., user roles, availability).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/react/app_task_manager.gif)
### Integrations
* **Systems (API Endpoints)**: Configurations for connecting to external systems via REST APIs. These endpoints facilitate data exchange and integration with third-party platforms.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/systems_overview.png)
* **Workflows**: Workflows are configurable sequences of tasks and decision nodes that automate data processing, system interactions, and integrations, enabling efficient communication and functionality across different applications and services.
### Dependencies
Dependencies refer to the relationship between an application and the external resources it relies on, typically housed in a **Library**. These dependencies allow applications to access and utilize common assets, like processes, enumerations, or media files, from another project without duplicating content. By managing dependencies effectively, organizations can ensure that their applications remain consistent, efficient, and modular, streamlining both development and maintenance.
* **Libraries**: A library is a special type of project that stores reusable resources. Applications can declare a library as a dependency to gain access to the resources stored within it.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/react/libraries_dependencies.gif)
* **Build-Specific Dependencies**: When an application adds a library as a dependency, it does so by referencing a specific build of that library.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/react/build_library.png)
This build-specific dependency ensures that changes in the library do not automatically propagate to the dependent applications, providing stability and control over which version is in use.
* **Versioning**: Dependencies are versioned, meaning an application can rely on a particular version of a library's build. This versioning capability allows applications to remain insulated from future changes until they are ready to update to a newer version.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/library_build.gif)
Read more about libraries by accessing this section
***
### Configuration Parameters
Configuration Parameters are essential components that allow applications to be dynamic, flexible, and environment-specific. They provide a way to manage variables and values that are likely to change based on different deployment environments (e.g., Development, QA, Production), without requiring hardcoded changes within the application. This feature is particularly useful for managing sensitive information and environment-specific settings.
* **Set environment-specific values**: Tailor your application's behavior depending on the target environment.
* **Store sensitive data securely**: Store API keys, passwords, or tokens securely using environment variables.
* **Centralize settings**: Manage common values in one place, making it easier to update settings across multiple processes or integrations.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/config_params.png)
***
### Application level settings
#### Name
The name of the application, which serves as the main identifier for your project within FlowX AI. This name is used to categorize and manage all associated resources.
#### Type
You have two options:
* **Application**: A standard project that will contain processes, resources, integrations, and other elements.
* **Library**: A reusable set of resources that can be referenced by multiple applications.
#### Platform type
You can specify the platforms for the application:
* **Omnichannel**: The application will support web, mobile, and other platforms.
* **Web**: The application will be restricted to web-based access.
* **Mobile**: The application will only be accessible via mobile platforms.
#### Default theme
Choose a theme to apply a consistent look and feel across the application. Themes manage colors, fonts, and UI elements for a unified visual experience. In the screenshot, "FlowXTheme" is selected as the default.
#### Number formatting
* **Min Decimals** and **Max Decimals**: Configure how numbers are displayed in the application by setting the minimum and maximum decimal points. This helps ensure data consistency when dealing with financial or scientific information.
* **Date Format**: Choose the format for displaying dates (e.g., short or long formats) to ensure the information is localized or standardized based on your application's requirements.
* **Currency Format**: Set whether currency is displayed using the ISO code (e.g., USD) or using a symbol (\$). This affects how financial information is presented across the application.
#### Languages
This section allows you to manage languages supported by the application.
* **Default Language**: You can set one language as the default, which will be the primary language for users unless they specify otherwise. In the screenshot, English (EN) is set as the default language.
* **Add Multiple Languages**: This enables multi-language support. Substitution tags and enumerations can be localized to provide a better experience for users in different regions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/app_settings.png)
For more information about localization and internationalization, check the following section:
## Creating and managing applications
An application's lifecycle includes the following stages:
* Start by defining the name and platform type (Omnichannel, Web, Mobile).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/create_first_app.gif)
* Set up initial configurations like themes, languages, and formats.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/configure_initial.gif)
* Inherit system-wide design assets like themes or fonts.
* Add processes, define templates, set up enumerations, and manage integrations.
* Configure environment-specific parameters like API URLs or passwords using environment variables.
* Reuse resources from libraries by setting up dependencies, allowing access to shared content.
* Submit your configured resources to a version.
* Once finalized, a version can be committed for stability.
* Create a build from a version when ready to deploy to a different environment.
* A build packages the application version for a consistent deployment.
* Each build is immutable and cannot be modified once created, ensuring runtime stability.
## FAQs
**Q:** How do I create a new application version?

**A:** To create a new version, navigate to the application dashboard, select "Create New Version," and make any changes. Once finalized, commit the version to lock it.

**Q:** What happens when I create a build?

**A:** Creating a build captures a snapshot of the application version, consolidating resources into a single deployable package. Builds are immutable and cannot be edited once created.

**Q:** Can an application support multiple locales?

**A:** Yes, applications support multiple locales. You can define regional settings, such as date formats and currencies, to cater to different environments.

**Q:** What happens when shared resources are modified?

**A:** When modifying shared resources, FlowX AI enforces versioning. You can track which processes or resources are affected by a change and revert if necessary.

**Q:** What is the difference between an application version and a build?

**A:** An Application Version is a snapshot of resources and configurations that can be modified, tracked, and rolled back. A Build is a deployable package created from a committed version, and it is immutable once deployed.

**Q:** How do I add a library as a dependency?

**A:** Go to your application settings, navigate to dependencies, and select the desired library. Choose the build you want to use, and its resources will be accessible within your application.

**Q:** Can I modify a build after it has been created?

**A:** No, a build is immutable. To make changes, modify the application version, create a new version, and deploy a new build.

**Q:** How are resources identified across versions?

**A:** Each resource has a Resource Definition ID (consistent across versions) and a Resource Version ID (specific to each version). These ensure that the correct version of a resource is used at runtime.

**Q:** What are dependencies?

**A:** Dependencies allow applications to share and reuse resources like processes, enumerations, and templates that are stored in libraries. By setting a library as a dependency, an application can access the resources it needs without duplication, fostering modular development.

**Q:** How are dependency versions controlled?

**A:** Dependencies are version-controlled, meaning you can choose specific library builds to ensure stability. Each build version of a library captures the state of its resources, allowing applications to lock onto a particular version until they are ready to upgrade.

**Q:** Can I remove a dependency from an application?

**A:** Yes, you can remove a dependency from the application. Go to the **Dependencies** section in the application workspace and select the dependency you want to remove. However, make sure that no critical resources in the application rely on that dependency.

**Q:** What are circular dependencies?

**A:** Circular dependencies occur when two libraries depend on each other. This can lead to conflicts and unexpected behavior. It's best to keep dependencies modular and avoid tightly coupling libraries to prevent such issues.

**Q:** How should I test a new library build before adopting it?

**A:** Use a controlled environment like Dev or UAT to test new builds of the library before updating the dependency in your main application. This allows you to validate changes and ensure they don't negatively impact your application.

**Q:** What happens when a dependency is updated to a newer build?

**A:** When a dependency is updated to a newer build, any resources that were modified in the library will reflect the latest version. Applications have control over when to update, so older versions remain stable until the dependency is manually updated.
# Libraries
Libraries are specialized projects that serve as reusable containers for resources that can be shared across multiple applications.
![FlowX.AI Designer](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/designer_new.png)
Unlike regular applications, which are intended for creating and managing specific workflows, libraries are designed to house common elements—like processes, enumerations, and media assets—that other applications may rely upon.
This makes libraries a cornerstone for establishing consistency and efficiency across various projects.
#### Key features
* **Resource Sharing**:
* Libraries facilitate the reuse of resources like processes, enumerations, and media assets across different applications. This allows for a more modular design, where commonly used elements are stored centrally.
* Resources in a library can be included in other applications by setting the library as a dependency.
* **Dependencies**:
* An application can add a library as a dependency, meaning it will have access to the resources stored in that library.
* Dependencies are managed through builds; each application can choose a specific version (build) of a library to depend on. This ensures that updates in the library do not automatically impact all applications unless intended.
* **Versioning**:
* Just like applications, libraries can be versioned. Each version of a library captures the state of its resources at a specific point in time.
* This versioning allows applications to lock dependencies to specific library versions, providing stability and predictability during runtime.
#### Managing libraries
1. **Creating a Library**:
* Libraries can be created similarly to applications. The creation process involves setting a name, defining the resources it will contain, and managing its configuration settings.
* Once created, resources can be added to the library incrementally, allowing for iterative development.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/docs_library.png)
2. **Managing Library Resources**:
* Users can create, edit, delete, or version resources within the library. Resources include processes, enumerations, media files, and other elements that can be referenced by applications.
* A library can have multiple versions, each capturing the resource state. This allows backward compatibility with older application builds relying on specific library versions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/library_build.png)
3. **Adding Library Dependencies to Applications**:
* In the application workspace, libraries can be set as dependencies under the **Dependencies** section.
* Users can select which build version of the library to reference. This allows applications to control how library changes affect them by choosing to update the dependency to newer library builds as needed.
Libraries can be added as a dependency only to work-in-progress (WIP) application versions.
#### Library build process
* **Builds in Libraries** are tagged versions, allowing a snapshot of the current library resources to be used by applications. Each build in a library captures the state of all its resources.
* Applications referencing a library can specify the exact build to use, ensuring that changes to the library do not inadvertently impact dependent applications.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/application/library_build.gif)
#### Use cases
1. **Centralized Resource Management**:
* Organizations that need to maintain a standard set of processes or other resources can use libraries to store these centrally. Each application can then use the same library resources, ensuring consistency.
2. **Version-Controlled Dependency Management**:
* By utilizing builds and versioning within libraries, applications can safely depend on specific versions, reducing the risk of unexpected behavior due to resource updates.
3. **Streamlining Updates Across Applications**:
* When an update is required across multiple applications that rely on the same resources, it can be done by updating the library and releasing a new build. Each application can then choose to update to the new build when ready.
## Best practices
* Use libraries for resources that need to be standardized across multiple applications.
* Carefully manage dependencies and choose specific builds to avoid unexpected runtime behavior.
* Leverage versioning to maintain a clear history of resource changes, allowing rollbacks if necessary.
Libraries thus play a crucial role in modularizing the development process, enabling reusable and maintainable components across different applications. This leads to improved project organization, reduced redundancy, and simplified resource management.
## FAQs
**Q:** How do I add a library as a dependency to an application?

**A:** In the application workspace, navigate to the **Dependencies** section. Select the library you want to add, choose the specific build version, and confirm. Once added, the resources from that library will be available for use within the application.

**Q:** What happens to dependent applications when a library resource is updated?

**A:** When a resource in a library is updated, dependent applications will not be affected unless the library build they reference is updated to a newer version. This allows applications to maintain stability by controlling when updates are adopted.

**Q:** Can an application depend on multiple libraries?

**A:** Yes, an application can depend on multiple libraries. Each library can be referenced by selecting the desired build version. However, ensure that dependencies are managed carefully to avoid conflicts or redundancy.

**Q:** How should I switch to a newer library build?

**A:** Before switching to a newer library build, test it in a development or staging environment to ensure compatibility. If everything functions as expected, update the application’s dependency to the new build in the **Dependencies** section.
# Resources
Overview of Global and Application Resources, including their usage, dependencies, promotion, and configuration within the platform.
## Overview
Resources are categorized into **Global Resources** and **Application Resources**. Each type of resource has unique characteristics that determine its usage, dependencies, and how it’s promoted between environments. Understanding these differences is crucial for efficient application development, management, and deployment.
***
## Global Resources
Global Resources are elements designed to be **reused across multiple applications** or business contexts. These resources are often organized within **libraries**, enabling consistency and efficiency by providing a central repository of shared components.
### Key Characteristics of Global Resources
* **Reusability**: Global Resources are created with the intention of reuse. They include common design elements, themes, fonts, and other assets that can be referenced by multiple applications.
* **Dependency Rules**: Libraries, which store Global Resources, **cannot have dependencies** on other libraries or applications. This ensures that Global Resources remain standalone, maximizing their adaptability across various business cases.
* **Application Independence**: These resources are not application-specific, making them versatile for broad use cases. Applications can reference libraries without requiring modifications to the core resources.
### Examples of Global Resources
1. **Design Assets**
* **Themes**: Standardized themes provide a consistent look and feel across applications.
* **System Assets**: Common media elements stored in the media library (e.g., images, icons) for theme customization.
* **Fonts**: Font families that can be reused to maintain branding across different applications.
2. **Plugins**
* **Languages**:
* **Notification and Document Templates**: Managed at the global level, although they may be versioned for future releases.
* **Out of office**:
3. **General settings**
* **Users**:
* **Roles**:
* **Groups**:
* **Audit Log**:
4. **Libraries**: Organized resource containers including **enumerations**, **substitution tags**, and **CMS components** for multi-application use.
### Promotion of Global Resources
* **Promotion Workflow**: Global Resources within libraries are promoted separately from applications. They are typically **imported into the Designer UI** in target environments as part of their own libraries.
* **Configuration Management**: When libraries are promoted, existing configurations, such as **generic parameters**, are replaced by application-level configurations in the target environment.
***
## Application Resources
Application Resources are resources specific to a particular **business use case**. Unlike Global Resources, these resources are confined to the context of the application they belong to, allowing for tailored configurations and dependencies on one or more libraries.
### Key Characteristics of Application Resources
* **Business Specificity**: Application Resources are tailored to a specific application, addressing unique business requirements.
* **Dependencies on Libraries**: Applications can reference libraries to access Global Resources, allowing for customization and adaptability.
* **Configurability**: Application-specific configurations are defined at the development stage, and values can be updated as needed in upper environments through environment variables or direct parameter overrides.
### Examples of Application Resources
1. **Processes**:
2. **CMS Components**: Custom CMS enumerations, substitution tags, and media items unique to the application.
3. **Task Management**:
* **Views**: Configurable interfaces to display task-related data based on process definitions.
* **Hooks**: Users with task management permissions can create hooks to trigger specific process instances, such as sending notifications when events occur.
* **Stages**: Stages that allow task progression through different statuses.
* **Allocation rules**: Define how tasks are assigned to users or teams.
4. **Integration Designer**:
* **Systems**: A system is a collection of resources—endpoints, authentication, and variables—used to define and run integration workflows.
* **Workflows**: A workflow defines a series of tasks and processes to automate system integrations. Within the Integration Designer, workflows can be configured using different components to ensure efficient data exchange and process orchestration.
5. **Configuration Parameters**: Application-specific rendering settings like `applicationUuid`, `locale`, `language`, and `process parameters`.
6. **Application Builds**: Builds represent finalized versions of the application, including all associated metadata and linked library versions.
7. **Application Settings**: Configure various aspects of a project like platform type, default theme, formatting, etc.
### Promotion of Application Resources
* **Promotion Workflow**: Only **builds** (not commits) are eligible for promotion to upper environments. Builds are exported from the development environment and imported into target environments via the Designer UI.
* **Design Asset Handling**: During import, any referenced design assets are created if absent but are not updated, ensuring consistency in upper environments.
* **Configuration Parameters Overrides**: Upper environment values replace development defaults, and these can be managed through environment variables, enabling flexibility without direct development environment access.
***
## Summary Table
| Resource Type | Reusability | Dependency Management | Promotion Method | Configuration Management |
| ------------------------- | -------------------- | ----------------------- | ---------------------------------- | ------------------------------------ |
| **Global Resources** | Multi-application | No dependencies allowed | Libraries imported via Designer UI | Generic parameters managed centrally |
| **Application Resources** | Application-specific | Can depend on libraries | Builds promoted via Designer UI | Environment-specific overrides |
***
# Active policy
The Active policy defines the strategy for selecting which build version of an application is active in the runtime environment.
This feature plays a key role in ensuring that the correct [**application**](../managing-applications/application) configuration is executed, managing how processes interact with specific builds, and controlling how changes propagate through environments. The active policy determines which application version is currently "live" and dictates how updates and new builds are adopted.
#### Key Concepts
1. **Active Build**
* The active build is the currently selected version of an application that is deployed in the runtime environment. It is the version that all process executions will reference when initiated.
* Only one build can be active per environment at a time, ensuring clarity and stability.
2. **Policy Types**
* **Draft Policy**: Configures the application to run the latest draft version. This is useful for environments like development or testing, where ongoing changes need to be reflected in real-time without committing to a finalized version.
* **Fixed Version Policy**: Points to a specific, committed build version of the application. This policy ensures that no unexpected updates are introduced and is typically used for UAT and Production environments.
3. **Version Control and Flexibility**
* Active Policy provides flexibility by allowing the user to switch between different build versions based on the environment's needs. This makes it easy to test, stage, and deploy without affecting production stability.
* By managing the policy, you control which changes go live and when, allowing for seamless testing and controlled rollouts.
#### Workflow for Managing Active Policy
1. **Setting the Active Policy**
* Navigate to the application's settings in the FlowX AI Designer.
* Go to the **Active Policy** section, where you can manage the behavior of the active build.
* Choose the policy type—either Draft or Fixed Version—to specify how builds should be managed in the environment.
2. **Managing Draft Policy**
* When selecting the Draft Policy, the application will always refer to the latest draft version created on the chosen branch.
* This allows for continuous development and testing without having to commit to a specific version. However, it is important to use this policy only in non-production environments.
3. **Selecting a Fixed Version Policy**
* For stability, you can set the active policy to a Fixed Version. This locks the application to a specific build, ensuring that no changes are applied unless a new build is created and explicitly set as active.
* This policy is ideal for UAT, QA, and Production environments where stability and consistency are key.
4. **Publishing a New Build as Active**
* Once a new build is ready and verified, you can update the active policy to point to the new build version.
* The change takes effect immediately, making the new build the active version for all processes and interactions in that environment.
#### Key Features
1. **Environment-Specific Control**
* Each environment (Development, UAT, Production) can have its own active policy, providing control over what version of the application is running in each stage of the development lifecycle.
2. **Rollback Capability**
* The active policy makes it easy to revert to a previous build if needed. By simply changing the active policy, you can switch back to an earlier stable version, minimizing disruption.
3. **Branch-Specific Management**
* Active Policy supports management by branch. This means you can maintain different policies for different branches, allowing separate versions of the application to be active in separate environments.
4. **Process Consistency**
* By defining which build version is active, the active policy ensures that all processes reference the correct resources. This eliminates inconsistencies and ensures that the expected behavior is maintained throughout the application's lifecycle.
#### Benefits
* **Controlled Rollouts**: Provides a clear pathway to manage when updates go live, ensuring that no untested changes accidentally reach production.
* **Simplified Management**: Offers a straightforward way to manage the complexity of multiple application versions across various environments.
* **Enhanced Stability**: Reduces the risk of runtime errors by maintaining control over the specific version that is active.
* **Efficient Testing**: Allows for easy testing of new changes without affecting the stability of other environments by using Draft Policy.
#### Managing Active Policy
1. **Configuring a New Active Policy**
* In the application settings, navigate to the **Active Policy** tab.
* Choose between **Draft** or **Fixed Version**.
* If selecting Fixed Version, pick the specific build from the dropdown list.
2. **Changing the Active Build**
* To update which build is active, modify the policy to reference a new build.
* Confirm the changes to activate the selected build in the runtime environment.
3. **Branch Management**
* Use branches to manage draft builds independently. Each branch can have its own active policy, which allows parallel development without interference.
#### Technical Details
1. **Application Context and Resource References**
* The active policy is closely tied to the application context. It determines how resource references are handled during process execution.
* At runtime, the system identifies the active build based on the policy, ensuring that the correct resource definitions are used.
2. **Metadata and Versioning**
* Each active policy includes metadata about the selected build, including build ID, version tags, and branch references (a hypothetical sketch follows this list). This metadata helps track which version is running in each environment.
3. **Interaction with Runtime Environment**
* The runtime environment relies on the active policy to decide which build version of an application to execute. This is managed through an internal reference that points to the active build.
4. **Storage and Management**
* The active policy is stored separately from the build data, making it easy to update the active build without modifying the build itself. This separation ensures that changes to the active state do not affect the integrity of the builds.
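As a rough illustration of the kind of metadata an active policy carries, consider the sketch below. All field names are hypothetical, chosen only to mirror the concepts listed above (policy type, branch, build ID, version tag); this is not the platform's actual storage format:

```json
{
  "policyType": "FIXED_VERSION",
  "branch": "main",
  "build": {
    "buildId": "b-20240117-001",
    "versionTag": "v1.4.0"
  }
}
```

A Draft policy record would simply omit the fixed build reference and track the latest draft on the branch instead.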
#### Example Use Case
Imagine you have an application in production, and a new feature is developed. You:
* Use a **Draft Policy** to test the new feature in a development environment, pointing to the latest draft build.
* Once the feature is verified, create a build and switch the environment's active policy to a **Fixed Version**, ensuring that the feature is consistent and stable.
* In case a problem arises, you can easily revert the active policy to a previous build version without any reconfiguration.
#### Conclusion
The Active Policy feature in FlowX AI provides a flexible and reliable way to manage which version of an application is running in each environment. It offers a clear structure for testing, deployment, and rollback, making it a vital tool for maintaining stability and consistency throughout the application's lifecycle.
# Failed process start (exceptions)
Exceptions are types of errors meant to help you debug a failure in the execution of a process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/exceptions1.png)
Exceptions can be accessed from multiple places:
* **Failed process start** tab from **Active process** menu in FlowX.AI Designer.
* **Process Status** view, accessible from **Process instances** list in FlowX.AI Designer.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/exceptions2.png)
If you open a process instance and it does not contain exceptions, the **Exceptions** tab will not be displayed.
### Exceptions data
When you click the **View** button, a detailed exception is displayed.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/exceptions_data.png)
* **Process Definition**: The process where the exception was thrown.
* **Source**: The source of the exception (see the possible type of [sources](#possible-sources) below).
* **Message**: A hint type of message to help you understand what's wrong with your process.
* **Type**: Exception type.
* **Cause Type**: Cause type (or the name of the node if it is the case).
* **Process Instance UUID**: Process instance unique identifier (UUID).
* **Token UUID**: The token unique identifier.
* **Timestamp**: The default format is `yyyy-MM-dd'T'HH:mm:ss.SSSZ` (e.g., `2024-05-01T12:30:45.123+0000`).
* **Details**: Stack trace (a **stack trace** is a list of the method calls that the process was in the middle of when an **Exception** was thrown).
#### Possible sources
### Exceptions type
Based on the exception type, there are multiple causes that could make a process fail. Here are some examples:
| Type                     | Cause                                                                                                                                                                  |
| :----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Business Rule Evaluation | When the execution of action rules fails for any reason.                                                                                                                |
| Condition Evaluation     | When the evaluation of action conditions fails.                                                                                                                         |
| Engine                   | When the connection with the database fails, or when the connection with [Redis](../../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-redis) fails. |
| Definition               | Misconfigurations: process definition name, subprocess parent process ID value, missing start node condition.                                                           |
| Node                     | When an outgoing node can't be found (missing sequence, etc.).                                                                                                          |
| Gateway Evaluation       | When the token can't pass a gateway for any reason; possible causes: missing sequence/node, failed node rule.                                                           |
| Subprocess               | Exceptions are saved for subprocesses just like for any other process; the parent process ID is also saved (this can be used to link them when displaying exceptions).  |
# Process instance
A process instance is a specific execution of a business process that is defined on the FlowX.AI platform. Once a process definition is added to the platform, it can be executed, monitored, and optimized by creating an instance of the definition.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/proc_inst.png)
## Overview
Once the desired processes are defined in the platform, they are ready to be used. Each time a process needs to be used, for example each time a customer wants to request a new credit card, a new instance of the specified process definition is started in the platform. Think of the process definition as a blueprint for a house, and of the process instance as each house of that type being built.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/process_instance_fin.png)
The **FlowX.AI Engine** is responsible for executing the steps in the process definition and handling all the business logic. The token represents the current position in the process and moves from one node to the next based on the sequences and rules defined in the exclusive gateways. In the case of parallel gateways, child tokens are created and eventually merged back into the parent token.
Kafka events are used for communication between FLOWX.AI components such as the engine and integrations/plugins. Each event type is associated with a Kafka topic to track and orchestrate the messages sent on Kafka. The engine updates the UI by sending messages through sockets.
## Checking the Process Status
To check the status of a process or troubleshoot a failed process, follow these steps:
1. Open **FlowX.AI Designer**.
2. Go to **Processes → Active Process → Process instances**.
3. Click **Process status** icon.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/process_status.png)
## Understanding the Process Status data
Understanding the various elements within process status data is crucial. Here's what each component entails:
* The **Status** field indicates the state of the process instance, offering distinct values:
| Status         | Description                                                                                                                                   |
| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| **CREATED** | Visible if there's an error during process creation. Displays as `STARTED` if there were no errors during creation. |
| **STARTED** | Indicates the current running status of the process. |
| **DISMISSED** | Available for processes with subprocesses, seen when a user halts a subprocess. |
| **EXPIRED** | This status appears when the defined "expiryTime" expression in the process definition elapses since the process was initiated (`STARTED`). |
| **FINISHED** | Signifies successful completion of the process execution. |
| **TERMINATED** | Implies a termination request has been sent to the instance. |
| **ON HOLD** | Marks a state where the process is no longer editable. |
| **FAILED** | Occurs if a CronJob triggers at a specific hour, and the instance isn't finished by then. |
* **Active process instance**: The UUID of the process instance, with a copy action available.
* **Variables**: Displayed as an expanded JSON.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/process_variables_new.png)
* **Tokens**: A token represents the state within the process instance and describes the current position in the process flow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/process_tokens.png)
For more information about token status details, see [Token](../token).
* **Subprocesses**: Displayed only if the current process instance generated a [subprocess](./subprocess) instance.
* **Exceptions**: Errors that let you know where the process is blocked, with a direct link to the node where the process is breaking for easy editing.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/process_exceptions.png)
For more information on token status details and exceptions, check the following section:
* **Audit Log**: The audit log displays events registered for process instances, tokens, tasks, and exceptions in reverse chronological order by timestamp.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/process_status_audit.png)
### Color coding
In the **Process Status** view, some nodes are highlighted with different colors to easily identify any failures:
* **Green**: Nodes highlighted with green mark the nodes passed by the [token](../token).
* **Red**: The node highlighted with red marks the node where the token is stuck (process failure).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/color_coding.gif)
## Starting a new process instance
To start a new process instance, a request must be made to the [FlowX.AI Engine](../../platform-deep-dive/core-components/flowx-engine). This is handled by the web/mobile application.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/process/process_instance_diagram.png)
To be able to start a new process instance, the current user needs to have the appropriate role/permissions:
When starting a new process instance, we can also set it to [inherit some values from a previous process instance](../../platform-deep-dive/core-components/flowx-engine#orchestration).
## Troubleshooting possible errors
If everything is configured correctly, the new process instance should be visible in the UI and added to the database. However, if you encounter issues, here are some common error messages and their possible solutions:
| Error Message | Description |
| ------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
| *"Process definition not found."* | The process definition with the requested name was not set as published. |
| *"Start node for process definition not found."* | The start node was not properly configured. |
| *"Multiple start nodes found, but start condition not specified."* | Multiple start nodes were defined, but the start condition to choose the start node was not set. |
| *"Some mandatory params are missing."* | Some parameters set as mandatory were not included in the start request. |
| `HTTP code 403 - Forbidden` | The current user does not have the process access role for starting that process. |
| `HTTP code 401 - Unauthorized` | The current user is not logged in. |
# Builds
The Build feature allows for the creation of deployable packages of an application, encapsulating all its resources into a single unit.
A build is a snapshot of a specific application version, packaged and prepared for deployment to a runtime environment. The concept of builds plays a crucial role in ensuring that the correct version of an application, with all its configurations and dependencies, runs consistently across different environments (e.g., Dev, UAT, Production).
#### Key concepts
1. **Application Version vs. Build**
* **Application Version** is the editable snapshot of an application's state at a specific point in time. It contains all the resources (like processes, integrations, templates, etc.) and configurations grouped within the application.
* A **Build** is the deployable package of an application version. It is the compiled and immutable state that contains all the application resources transformed into executable components for the runtime environment.
2. **Single Version Rule**
* A build can only represent a single version of an application. If you have multiple versions of an application, each one can have a unique build. This ensures that you can clearly manage which version of the application is running in a specific environment without conflicts.
3. **Consistency and Deployment**
* Builds ensure that when you deploy an application to a different environment (like moving from Dev to UAT), the exact configuration, processes, templates, integrations, and all associated resources remain consistent.
* Builds are immutable—once created, a build cannot be altered. Any updates require creating a new version of the application and generating a new build.
#### Creating a build
1. **Develop the Application**
* First, work on the application in the Config mode. This involves creating processes, adding resources, and configuring integrations. Multiple versions of the application can be created and saved as drafts.
2. **Submit Changes to Version**
* When the application is ready for deployment, submit the changes to create a specific version of the application. This step involves finalizing the current configuration, grouping all resources and changes, and marking them under a specific version ID.
3. **Generate a Build**
* In the application settings, select the version you want to deploy and generate a build. This step compiles the selected application version into a package, converting all the resource definitions into executable components for the runtime environment.
* You can specify build metadata such as version tags, which help identify and manage builds during deployment.
4. **Publish the Build**
* Once the build is generated, it can be published. The published build is now available for execution in the chosen runtime environment, making it the active version that responds to process and integration calls.
#### Key features
1. **Immutability**
* Builds are immutable, ensuring that once a build is created, it reflects a fixed state of the application. This immutability prevents accidental changes and ensures consistency.
2. **Checksum and Integrity**
* Each build includes a checksum or identifier to verify its integrity. This ensures that the deployed build matches the expected configuration and no changes have been made post-build.
3. **Runtime Dependency**
* The runtime environment relies on builds to determine the active application configuration. This means the runtime does not directly interact with the editable application versions but uses builds to maintain stability and reliability.
4. **Version Control**
* Builds are version-controlled, and each build corresponds to a specific application version. This means you can trace exactly which application configuration is active in a particular environment at any time.
#### Benefits
* **Consistency Across Environments**: Builds ensure that the same version of an application runs in different environments, avoiding discrepancies between development and production.
* **Reduced Errors**: Immutable builds reduce the risk of runtime errors caused by unexpected changes.
* **Simplified Rollbacks**: If an issue is detected with a current build, previous builds can be deployed seamlessly, providing an efficient rollback strategy.
* **Streamlined Deployment**: Builds allow for a straightforward deployment process, reducing the complexity of transferring configurations and resources between environments.
#### Build management
1. **Creating a Build**
* Go to the application settings.
* Select the application version you want to deploy.
* Click the **Create Build** button and follow the prompts to configure build settings, including adding metadata or tags.
2. **Viewing and Managing Builds**
* Access the list of builds for a specific application through the Builds section.
* Each build entry provides details like version number, creation date, creator, and status (e.g., Draft, Published).
3. **Publishing a Build**
* Once a build is verified and ready, publish it to make it the active build in the desired environment.
* Only one build can be active per environment at any given time.
#### Technical details
1. **Manifest and Metadata**
* Each build contains a manifest that lists all the resources included, their version IDs, and their resource definitions (a hypothetical sketch follows this list).
* Metadata helps identify which application version the build is derived from, providing traceability.
2. **Separation of Design and Runtime**
* Builds serve as a bridge between the design (config) view and the runtime view. The config view is where changes are made and managed, while the runtime view uses builds to execute stable, tested versions of applications.
3. **Storage**
* Builds are stored in a dedicated storage system to ensure they are separate from the editable application versions. This storage supports easy retrieval and deployment of builds.
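To make the manifest idea concrete, here is a minimal sketch of what such a record could look like. All field names are hypothetical illustrations of the concepts above (source application version, included resources and their version IDs), not the actual build format:

```json
{
  "applicationVersion": "1.0",
  "buildId": "b-001",
  "resources": [
    { "type": "process", "name": "customer_onboarding", "versionId": "pv-42" },
    { "type": "enumeration", "name": "country", "versionId": "en-7" }
  ]
}
```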
#### Example use case
Imagine you have an application for customer onboarding with multiple processes, templates, and integrations. After completing development and final testing, you:
* Submit changes to create **Version 1.0** of the application.
* Create a build for **Version 1.0**.
* Deploy this build to the UAT environment for final stakeholder review.
* If any adjustments are needed, you go back to the application, make changes, and submit them as **Version 1.1**.
* Create a new build for **Version 1.1**, ensuring that the changes are encapsulated separately from the previous version.
Once everything is approved, you can publish **Version 1.1 Build** to the Production environment, ensuring all environments are aligned without manual reconfiguration.
#### Conclusion
The Build feature is essential for managing applications across multiple environments. It provides a clear and organized pathway from development to deployment, ensuring consistency, integrity, and stability throughout the application's lifecycle.
# Configuration parameters overrides
The Configuration parameters overrides feature allows you to set environment-specific values for certain parameters within an application.
This feature enables flexibility and customization, making it possible to tailor an application's behavior and integration settings to the needs of each environment (e.g., Dev, UAT, Production). By defining overrides, users can easily manage variables like API endpoints, authentication tokens, environment-specific URLs, and other settings without altering the core application configuration.
#### Key Concepts
1. **Configuration Parameters**
* Configuration parameters are placeholders used throughout an application to store key settings and values. These parameters can include anything from URLs, API keys, and credentials to integration-specific details.
* Parameters can be defined at the application level and referenced throughout processes, business rules, integrations, and templates.
2. **Environment Overrides**
* Overrides are specific values assigned to configuration parameters for a particular environment. This ensures that an application behaves appropriately for each environment without requiring changes to the application itself.
* Each environment (Development, UAT, Production) can have a unique set of overrides that replace the default parameter values when the application is deployed.
3. **Centralized Management**
* Configuration parameters and their overrides are managed centrally within the application’s settings. This allows for quick adjustments and a clear overview of how each environment is configured.
#### Workflow for Managing Configuration Parameter Overrides
1. **Defining Configuration Parameters**
* Go to the application settings and navigate to the **Configuration Parameters** section.
* Define parameters that will be used across the application, such as:
* **CRM URL**: Base URL for CRM integration.
* **API Key**: Secure key for external service authentication.
* **Environment URL**: Different base URLs for various environments (Dev, UAT, Production).
2. **Setting Up Environment-Specific Overrides**
* Access the **Overrides** section for configuration parameters.
* For each environment (e.g., Dev, UAT), assign specific values to the defined parameters. For example:
* Dev Environment: `CRM_URL` → `https://dev.crm.example.com`
* UAT Environment: `CRM_URL` → `https://uat.crm.example.com`
* Save the environment-specific overrides. These values will take precedence over the default settings when deployed in that environment.
3. **Applying Overrides at Runtime**
* When an application is deployed to a specific environment, the system automatically applies the relevant overrides.
* Overrides ensure that the configuration aligns with environment-specific requirements, like endpoint adjustments or security settings, without altering the core setup.
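Putting the workflow above together, the relationship between a parameter's default value and its per-environment overrides can be pictured as follows. The JSON shape is illustrative only (and the Production URL is a made-up example); it is not how the platform stores overrides:

```json
{
  "parameter": "CRM_URL",
  "defaultValue": "https://dev.crm.example.com",
  "overrides": {
    "UAT": "https://uat.crm.example.com",
    "Production": "https://crm.example.com"
  }
}
```

At runtime, the environment's entry in `overrides` wins; if none exists, the default value applies.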
#### Key Features
1. **Flexibility Across Environments**
* Easily adjust configuration parameters without changing the core application, ensuring that the application adapts to the target environment seamlessly.
2. **Centralized Parameter Management**
* Configuration parameters and their environment-specific overrides are managed centrally within the application settings, providing a single source of truth.
3. **Dynamic Integration Adaptation**
* Environment-specific overrides allow integrations to connect to different endpoints or use different credentials based on the environment, supporting a smoother development-to-production workflow.
4. **Improved Security**
* Sensitive data, such as API keys or credentials, can be stored as environment-specific overrides, ensuring that development and testing environments do not share the same sensitive values as production.
#### Benefits
* **Simplified Deployment**: Overrides eliminate the need for manual configuration changes during deployment, streamlining the process and reducing the risk of errors.
* **Environment Consistency**: Each environment can have its own tailored settings, ensuring consistent behavior and performance across all stages.
* **Secure Configuration**: Overrides keep sensitive parameters isolated to the appropriate environments, enhancing security and minimizing the risk of exposure.
* **Ease of Maintenance**: Centralized management of parameters and overrides simplifies adjustments when changes are needed, minimizing maintenance overhead.
#### Managing Configuration Parameters Overrides
1. **Creating a Parameter**
* In the application settings, navigate to the **Configuration Parameters** section.
* Define a new parameter by specifying its name, type, and default value.
* This parameter will be used as a placeholder throughout the application, referenced in processes, templates, and integrations.
2. **Assigning Overrides**
* Access the **Overrides** tab for the configuration parameters.
* Choose the environment (e.g., Dev, UAT, Production) and assign a specific override value for each parameter.
* Confirm the changes to apply the overrides for the selected environment.
3. **Viewing and Editing Overrides**
* From the application’s settings, you can review all environment-specific overrides in one place.
* Edit existing overrides or add new ones as needed to adapt to changes in environment requirements.
#### Technical Details
1. **Parameter Resolution at Runtime**
* During runtime, the system retrieves the active configuration parameters for the application. If overrides are set for the environment, they replace the default values.
* This resolution ensures that the correct parameters are used, based on the environment in which the application is running.
2. **Integration with Business Rules and Processes**
* Configuration parameters can be directly referenced in business rules and processes. Overrides ensure that these references point to the correct environment-specific values during execution.
3. **Storage and Management**
* Configuration parameters and their overrides are stored centrally, and the values are retrieved dynamically during runtime. This centralized approach ensures that updates to parameters are automatically reflected in the application without needing a new build.
#### Example Use Case
Imagine you have an application that integrates with a third-party CRM system. You:
* Define a `CRM_URL` parameter with a default value for the development environment.
* Create overrides for UAT and Production environments with their respective URLs.
* When the application is deployed to the UAT environment, the system automatically uses the UAT-specific CRM URL for all integration points without changing the application’s core configuration.
* When it's time to deploy to Production, the system uses the production-specific overrides, ensuring that the application connects to the right CRM instance.
#### Conclusion
The Configuration parameters overrides feature empowers users to manage and tailor application behavior dynamically across environments. This flexibility enhances security, consistency, and control, ensuring that applications run smoothly and securely throughout the development and deployment lifecycle.
# Add new Kafka exchange mock
POST {MOCK_ADAPTER_URL}/api/kafka-exchanges/
Add a new mocked Kafka exchange to the mock adapter.
The URL of the mock adapter.
The mocked JSON message that the integration will send
The JSON message the integration should reply with
Should match the topic the engine listens on for replies from the integration
Should match the topic name that the integration listens on
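Because the parameter names are not listed above, the following request is only a hypothetical sketch of how such a mock could be registered; every field name and topic value here is an assumption, not the adapter's documented contract:

```bash
# Hypothetical sketch - all field names and topic values below are assumptions
curl --request POST \
  --url {MOCK_ADAPTER_URL}/api/kafka-exchanges/ \
  --header 'Content-Type: application/json' \
  --data '{
    "receivedMessage": { "clientId": "12345" },
    "replyMessage": { "status": "APPROVED" },
    "replyTopic": "ai.flowx.dev.engine.receive.integration.results.v1",
    "listenTopic": "ai.flowx.dev.integration.trigger.v1"
  }'
```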
# Enable misconfigurations
GET {{baseUrlAdmin}}/api/process-versions/compute
To enable and compute warnings for processes created in FlowX versions earlier than 4.1, call the following endpoint.
The base URL of the admin service.
This is the specific operation performed to compute misconfigurations for older processes.
```bash Request
curl --request GET \
--url {baseUrlAdmin}/api/process-versions/compute
```
```json Response
{ "status": "success" }
```
# Download a file
GET documentURL/internal/storage/download
This endpoint allows you to download a file by specifying its path or key.
The base URL of the document service.
A segment of the path that specifies it is an internal call.
A segment of the path referring to storage resources.
The unique identifier for the download.
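A minimal sketch of the call, assuming the file's path or key is passed as a query parameter; the parameter name `path` and the path value are assumptions for illustration:

```bash
# Sketch only - the query parameter name ("path") is an assumption
curl --request GET \
  --url 'documentURL/internal/storage/download?path=BUCKET_NAME/FILE_KEY'
```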
# Process status
GET {ENGINE_URL}/api/process/{PROCESS_INSTANCE_ID}/status
This endpoint retrieves the current status of a process instance.
The unique identifier of the process instance.
The URL of the engine.
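A minimal request sketch, following the pattern of the engine's other endpoints (authorization headers omitted):

```bash
curl --request GET \
  --url {ENGINE_URL}/api/process/{PROCESS_INSTANCE_ID}/status
```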
# Start process
POST {ENGINE_URL}/api/process/{PROCESS_DEFINITION_NAME}/start
Processes can be started by making an API call. Any parameters needed by the process can be sent in the request body.
Special cases for starting a process instance include:
* starting a process instance from another instance and inheriting some data from the first one
* a process with multiple start nodes, in which case a start condition must be set when making the start process call
The name of the process definition to instantiate.
The URL of the engine.
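A minimal request sketch (authorization headers omitted); the body parameter `clientName` is a made-up example of a process parameter:

```bash
curl --request POST \
  --url {ENGINE_URL}/api/process/{PROCESS_DEFINITION_NAME}/start \
  --header 'Content-Type: application/json' \
  --data '{ "clientName": "John Doe" }'
```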
# Start process and inherit values
POST {ENGINE_URL}/api/process/{PROCESS_DEFINITION_NAME}/start/inheritFrom/{RELATED_PROCESS_INSTANCE_UUID}
The `paramsToInherit` map should hold the needed values on one of the following keys, depending on the desired outcome:
* `paramsToCopy` - this is used to pick only a subset of parameters to be inherited from the parent process; it holds the list of key names that will be inherited from the parent parameters
* `withoutParams` - this is used in case we need to remove some parameter values from the parent process before inheriting them; it holds the list of key names that will be removed from the parent parameters
If none of these keys have values, all the parameter values from the parent process will be inherited by the new process.
The UUID of the related process instance from which values will be inherited.
The name of the process definition to be started.
A map containing information about which values to copy from the related process instance.
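For illustration, a sketch of a request that inherits only a subset of the parent's parameters, assuming the `paramsToInherit` map is sent in the request body; the key names inside `paramsToCopy` are made-up examples:

```bash
curl --request POST \
  --url {ENGINE_URL}/api/process/{PROCESS_DEFINITION_NAME}/start/inheritFrom/{RELATED_PROCESS_INSTANCE_UUID} \
  --header 'Content-Type: application/json' \
  --data '{
    "paramsToInherit": {
      "paramsToCopy": ["clientName", "loanAmount"]
    }
  }'
```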
# Execute action
POST {ENGINE_URL}/api/process/{PROCESS_INSTANCE_ID}/token/{TOKEN_INSTANCE_ID}/action/{ACTION_NAME}/execute
The name of the action to run.
The token instance ID.
The process instance ID.
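A sketch of the call, assuming any action parameters travel in the request body (shown empty here as an assumption):

```bash
curl --request POST \
  --url {ENGINE_URL}/api/process/{PROCESS_INSTANCE_ID}/token/{TOKEN_INSTANCE_ID}/action/{ACTION_NAME}/execute \
  --header 'Content-Type: application/json' \
  --data '{}'
```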
# List buckets
GET documentURL/internal/storage/buckets
The Documents Plugin provides the following REST API endpoints for interacting with the stored files.
This endpoint returns a list of available buckets.
The base URL of the document service.
A segment of the path that specifies it is an internal call.
A segment of the path referring to storage resources.
The particular resource in the storage being accessed.
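A minimal request sketch (authorization omitted):

```bash
curl --request GET \
  --url documentURL/internal/storage/buckets
```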
# List objects in a bucket
GET documentURL/internal/storage/buckets/{BUCKET_NAME}
This endpoint retrieves a list of objects stored within a specific bucket. Replace `{BUCKET_NAME}` with the actual name of the desired bucket.
The base URL of the document service.
A segment of the path that specifies it is an internal call.
A segment of the path referring to storage resources.
The particular resource in the storage being accessed.
The unique identifier for the storage bucket.
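A minimal request sketch (authorization omitted):

```bash
curl --request GET \
  --url documentURL/internal/storage/buckets/{BUCKET_NAME}
```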
# View Kafka exchanges
GET {MOCK_ADAPTER_URL}/api/kafka-exchanges/
View all available Kafka exchanges
The URL of the mock adapter.
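A minimal request sketch:

```bash
curl --request GET \
  --url {MOCK_ADAPTER_URL}/api/kafka-exchanges/
```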
# FlowX.AI Advancing Controller
The Advancing Controller is a support service for the FlowX.AI Engine that enhances the efficiency of advancing operations. It facilitates equal distribution and redistribution of the workload during scale-up and scale-down scenarios.
To achieve its functionality, the Advancing Controller microservice utilizes database triggers in both PostgreSQL and OracleDB configurations.
A database trigger is a function that is automatically executed whenever specific events, such as inserts, updates, or deletions, occur in the database.
## Usage
The Advancing Controller service is responsible for managing and optimizing the advancement process in the PostgreSQL or OracleDB databases. It ensures efficient workload distribution, performs cleanup tasks, and monitors the status of worker pods.
If a worker pod fails, the service reassigns its work to other pods to prevent [**process instances**](../../building-blocks/process/process-instance) from getting stuck.
It is essential to have the Advancing Controller service running alongside the FlowX Engine: both microservices must be up and running concurrently to ensure uninterrupted instance advancement and optimal performance.
## Configuration
For detailed instructions on how to set up the Advancing Controller microservice, refer to the following guide:
# FlowX.AI Events Gateway
The FlowX Events Gateway is a service that centralizes the delivery of Server-Sent Events (SSE) messages from the backend to the frontend.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/events_gateway_archi.png)
The FlowX Events Gateway serves as a central component for handling and distributing events within the system:
### Event processing
The Events Gateway system is responsible for receiving and processing events from various sources, such as the [**FlowX Engine**](./flowx-engine) and [**Task Management**](../plugins/custom-plugins/task-management/task-management-overview). It acts as an intermediary between these systems and the rest of the system components.
It is a reusable component and is also used in administration scenarios to provide feedback without a refresh—for example, [**misconfigurations**](../../../../release-notes/v4.1.0-may-2024/v4.1.0-may-2024#misconfigurations) in platform version 4.1, allowing an error to be displayed in real-time during configuration.
### Message distribution
The Events Gateway system reads messages from the Kafka topic (`messaging_events_topic`) and distributes them to relevant components within the system. It ensures that the messages are appropriately routed to the intended recipients for further processing.
### Event publication
The Events Gateway system plays a crucial role in publishing events to the frontend renderer (FE renderer). It communicates with the frontend renderer using `HTTP` via `WebFlux`. By publishing events, it enables real-time updates and notifications on the user interface, keeping the user informed about the progress and changes in the system.
It is designed to efficiently send updates to the frontend in the following scenarios:
* When reaching a specific [**User Task (UT)**](../../building-blocks/node/user-task-node), a notification is sent to ensure the corresponding screen is rendered promptly.
* When specific actions require data to be sent to the user interface from a node.
### Integration with Redis
[**Redis**](../../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-redis) plays an important role within the platform. It handles every message and is mandatory for the platform to function correctly, especially with the frontend. The platform relies on Redis to ensure that the messages are distributed efficiently and that the correct instance with the SSE connection pointer for the recipient is always reached.
The events-gateway system also interacts with Redis to publish events on a stream. This allows other components in the system to consume the events from the Redis stream and take appropriate actions based on the received events.
In the scenarios above, the FlowX Engine places a message on Kafka for the Events Gateway. The Events Gateway then retrieves the message and stores it in Redis.
In summary, the events-gateway system acts as a hub for event processing, message distribution, and event publication within the system. It ensures efficient communication and coordination between various system components, facilitating real-time updates and maintaining system consistency.
For more details about how to configure events-gateway microservice, check the following section:
# FlowX.AI Audit
The Audit service provides a centralized location for all audit events. The following details are available for each event.
* **Timestamp** - the date and time the event occurred; events are displayed in reverse chronological order
* **User** - the entity who initiated the event; either a username or a system
* **Subject** - the area or component of the system affected by the event:
  * Process Instance
  * Token
  * Task
  * Exception
  * Process definition
  * Node
  * Action
  * UI Component
  * General Settings
  * Swimlane
  * Swimlane Permissions
  * Connector
  * Enumeration
  * Enumeration Value
  * Substitution Tag
  * Content Model
  * Language
  * Source System
  * Image
  * Font file
* **Event** - the specific action that occurred:
  * Create
  * Update
  * Update bulk
  * Update state
  * Export
  * Import
  * Delete
  * Clone
  * Start
  * Start with inherit
  * Advance
  * View
  * Expire
  * Message Send
  * Message Receive
  * Notification receive
  * Run scheduled action
  * Execute action
  * Finish
  * Dismiss
  * Retry
  * Abort
  * Assign
  * Unassign
  * Hold
  * Unhold
* **Subject identifier** - the name related to the subject; there are different types of identifiers based on the selected subject
* **Version** - the version of the process definition at the time of the event
* **Status** - the outcome of the event (e.g. success or failure)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/audit_log_new.png)
## Filtering
Users can filter audit records by event date and by selecting specific options for User, Subject, and Subject Identifier.
* Filter by event date
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/audit_filter_by_event.png)
* **User** - single selection, type at least 4 characters
* **Subject** - single selection
* **Subject identifier** - exact match
## Audit log details
To view additional details for a specific event, click the eye icon on the right of the event in the list. The audit log details window includes the following information:
* **Event** - the specific action that occurred
* **URL** - the URL associated with the event
* **Body** - any additional data or information related to the event
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/audit_log_details.png)
More details on how to deploy the Audit microservice:
# FlowX.AI CMS
The FlowX.AI Headless Content Management System (CMS) is a core component of the FlowX.AI platform, designed to enhance the platform's capabilities with specialized functionalities for managing taxonomies and diverse content types.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/content_management.png#center)
## Key features
The FlowX.AI CMS offers the following features:
* Manage and configure enumerations for various content types
* Utilize tags to dynamically insert content values
* Support multiple languages for content localization
* Integrate with various external source systems
* Organize and manage media assets
## Deployment and integration
The CMS can be rapidly deployed on your chosen infrastructure, preloaded with necessary taxonomies or content via a REST interface, and integrated with the FlowX Engine using Kafka events.
For deployment and configuration, refer to the following guides:
## Using the CMS service
Once the CMS is deployed in your infrastructure, you can define and manage custom content types, such as lists with different values based on external systems, blog posts, and more.
### Kafka integration
You can use Kafka to translate/extract content values based on your defined languages and source systems.
#### Request content
Manage content retrieval messages between the CMS and the FlowX Engine using the following Kafka topics:
| Environment Variable | Default FLOWX.AI value (Customizable) |
| ----------------------------------- | ------------------------------------------------------------------ |
| KAFKA\_TOPIC\_REQUEST\_CONTENT\_IN | ai.flowx.dev.plugin.cms.trigger.retrieve.content.v1 |
| KAFKA\_TOPIC\_REQUEST\_CONTENT\_OUT | ai.flowx.dev.engine.receive.plugin.cms.retrieve.content.results.v1 |
* `KAFKA_TOPIC_REQUEST_CONTENT_IN`: This variable defines the topic used by the CMS to listen for incoming content retrieval requests.
* `KAFKA_TOPIC_REQUEST_CONTENT_OUT`: This variable defines the topic where the CMS sends the results of content retrieval requests back to the FlowX Engine.
You can find the defined topics in two ways:
1. In the FlowX.AI Designer: Go to **Platform Status** -> **cms-core-mngt** -> **kafkaTopicsHealthCheckIndicator** -> **Details** -> **Configuration** -> **Topic** -> **Request** -> **Content** (use the `in` topic).
![Kafka Topics in FlowX.AI Designer](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/kafka-translate-cms-topics.png)
2. Alternatively, check the CMS microservice deployment for the `KAFKA_TOPIC_REQUEST_CONTENT_IN` environment variable.
#### Example: Request a label by language or source system code
To translate custom codes into labels using the specified [language](./languages.mdx) or [source system](./source-systems), use the following request format, for instance when extracting values from a specific enumeration for a UI component.
Various external systems and integrations might use different labels for the same information. In processes, it is easier to use the corresponding code and translate it into the needed label only when necessary: for example, when sending data to other integrations or when generating documents.
#### Request content request
Add a [**Send Message Task** (kafka send event)](/4.1.x/docs/building-blocks/node/message-send-received-task-node) and configure it to send content requests to the FlowX.AI Engine.
The following values are expected in the request body of your Send Message Task node:
* At least one of `language` and `sourceSystem` should be defined (if you only need the `sourceSystem` to be translated, you can leave `language` empty and vice versa, but they cannot both be empty)
* A list of `entries` and their `codes` to be translated
**Expected Request Body:**
```json
{
"language": "en",
"sourceSystem": "FlowX",
"entries": [
{
"codes": [
"ROMANIA",
"BAHAMAS"
],
"contentDescription": {
"name": "country", //the name of the enumeration we used in this example
"version": 1, //optional, only if you want to extract from a specific version of your enumeration
"draft": true //optional
}
}
]
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/kafka-translate-cms.png)
The `version` and `draft` fields are optional. If not specified, the latest published content will be used.
#### Request content response
Add a **Receive Message Task** to handle the response from the CMS service. Configure it to listen to the topic where the Engine sends the response, e.g., the `ai.flowx.updates.contents.values.v1` topic.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/kafka-translate-cms-response.png)
**Response Message Structure**:
```json
{
"entries": [
{
"contentName": "country",
"code": "ROMANIA",
"label": "Romania",
"translatedCode": "ROMANIA-FlowX"
},
{
"contentName": "country",
"code": "BAHAMAS",
"label": "Bahamas",
"translatedCode": "BAHAMAS-FlowX"
}
],
"error": null
}
```
Next, we will change the system language and modify our process to display translations dynamically on another key on a separate screen.
# Enumerations
Enumerations manage collections of values that can be used as content in UI components or templates. Values can be defined for specific source systems or languages.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/enumerations.png)
On the main screen inside **Enumerations**, you have the following elements:
* **Name** - the name of the enumeration
* **Version** - the version of the enumeration
* **Draft** - switch button used to control the status of an enumeration, which can be **Draft** or **Published**
* **Last Updated** - the last time an enumeration has been updated
* **Open** - button used to access an enumeration to configure it, add more values, etc.
* **Delete** - button used to delete an enumeration
* **New enumeration** - button used to create a new enumeration
* **Breadcrumbs** - **Import/Export**
For each entry inside an enumeration (opened with the **Open** button), the following properties must be defined:
* **Code** - not displayed in the end-user interface, but used to assure value uniqueness
* **Labels** - strings that are displayed in the end-user interface, according to the language set for the generated solution
* **External source systems codes** - values that are set for each external system that might consume data from the process; these codes are further used by connectors, in order to send to an external system a value that it can validate
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/enumerations1.png)
### Adding a new enumeration
To add a new enumeration, follow these steps:
1. Go to **FLOWX Designer** and select the **Content Management** tab.
2. Select **Enumerations** from the list.
3. Add a suggestive name for your enumeration and then click **Add**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/adding_new_enum.png)
### Configuring an enumeration
After creating an enumeration, you can add values to it.
To configure an enumeration value, follow these steps:
1. Go to FLOWX.AI Designer and select the **Content Management** tab.
2. Select **Enumerations** from the list and open an enumeration.
3. Click **New value** and fill in the necessary details:
* **Code** - as mentioned above, this is not displayed in the end-user interface but is used to assure value uniqueness
* **Labels** - set the value of the string for each language you would like to use
* **Source Systems** - values that are set for each external system that might consume data from the process
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/enum_configuration.gif)
### Creating a child collection
Enumerations can also be defined as a hierarchy: for each entry, we can define a list of children values (for example, the names of countries defined under the continents' enumeration values). Hierarchies further enable cascading values in the end-user interface (for example, after selecting a continent in the first select [UI component](../../../building-blocks/ui-designer/ui-designer.mdx#ui-component-types), the second select component will contain only the children of that continent).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/enum_child_collection.png)
### Importing/exporting an enumeration
You can use the import/export feature to import or export enumerations using the following formats:
* JSON
* CSV
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/import_export_enum.png)
### Enumerations example
Enumerations can be used, for instance, to build elaborate lists of values (with children). Assuming you wish to add enumerations for Activity Domain Companies, you can create child collections by grouping related domains and activities.
We have the following example for ***Activity Domain Companies***:
#### **Activity Domain Companies → Agriculture forestry and fishing:**
* **Agriculture, hunting, and related services →**
Cultivation of non-perennial plants:
* Cultivation of cereals (excluding rice), leguminous plants and oilseeds
* Cultivation of rice
* Growing of vegetables and melons, roots and tubers
* Cultivation of tobacco
Cultivation of plants from permanent crops:
* Cultivation of grapes
* Cultivation of pome and stone fruits
* Cultivation of oil seeds
Animal husbandry:
* Raising of dairy cattle
* Raising of other cattle
* Raising of horses and other equines
* **Forestry and logging →**
Forestry and other forestry activities:
* Forestry and other forestry activities
Logging:
* Logging
Collection of non-wood forest products from spontaneous flora:
* Collection of non-wood forest products from spontaneous flora
* **Fisheries and aquaculture →**
Fishing:
* Sea fishing
* Freshwater fishing
Aquaculture:
* Maritime aquaculture
* Freshwater aquaculture
This is the output after adding all the lists/collections from above:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/enumerations_output.gif)
# Media library
The media library serves as a centralized hub for managing and organizing various types of media files, including images, GIFs, and more. It encompasses all the files that have been uploaded to the **processes**, providing a convenient location to view, organize, and upload new media files.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release-notes/media_library.gif)
You can also upload an image directly to the Media Library on the spot when configuring a process using the [**UI Designer**](/4.0/docs/building-blocks/ui-designer/ui-designer). More information [**here**](/4.0/docs/building-blocks/ui-designer/ui-component-types/image).
## Uploading a new asset
To upload an asset to the Media Library, follow these steps:
1. Open **FlowX Designer**.
2. Go to **Content Management** tab and select **Media Library**.
3. Click **Add new item**; the following details are displayed:
* **Upload item** - opens a local file browser
* **Key** - the key must be unique and cannot be changed afterwards
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/media_library_add_new.png)
4. Click the **Upload item** button and select a file from your local browser.
5. Click the **Upload item** button again to upload the asset.
* **Supported image formats**: PNG, JPEG, JPG, GIF, SVG or WebP format, 1 MB maximum size.
* **Supported files**: PDF documents, with a maximum file size limit of 10 MB.
## Displaying assets
Users can preview all the uploaded assets just by accessing the **Media Library**.
You have the following information about assets:
* Preview (thumbnail 48x48)
* Key
* Format ("-" for unknown format)
* Size
* Edited at
* Edited by
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/media_library_preview.png)
## Searching assets
You can search for an asset by its key (full key or a substring).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/search_asset.png)
## Replacing assets
You can replace the item stored under a specific key (this will not break references to it in process definitions).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/replace_asset.gif)
## Referencing assets in UI Designer
You have the following options when configuring image components using [UI Designer](../../../building-blocks/ui-designer/ui-designer):
* Source Location - here you must select **Media Library** as source location
* Image Key
* **Option 1**: trigger a dropdown with image keys - you can type to filter options or select from the initial list in the dropdown
* **Option 2**: open a popup with image thumbnails and keys, where you can also type to filter options or select from the initial list
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/media_library_options.png)
More details on how to configure an image component using UI Designer - [**here**](/4.0/docs/building-blocks/ui-designer/ui-component-types/image).
## Icons
The Icons feature allows you to personalize the icons used in UI elements. By uploading SVG files through the Media Library and marking them, you can choose icons from the available list in the UI Designer.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/icons.png)
When selecting icons in the UI Designer, only SVG files marked as icons in the Media Library will be displayed.
To ensure optimal visual rendering and alignment within your UI elements, it is recommended to use icons with small sizes such as: 16px, 24px, 32px.
Using icons specifically designed for these sizes helps maintain consistency and ensures a visually pleasing user interface. It is advisable to select icons from icon sets that provide these size options or to resize icons proportionally to fit within these dimensions.
Icons are displayed or rendered at their original, inherent size.
### Customization
Content-specific icons pertain to the content of UI elements, such as icons for [input fields](../../../building-blocks/ui-designer/ui-component-types/form-elements/input-form-field) or [send message buttons](../../../building-blocks/ui-designer/ui-component-types/buttons). These icons are readily accessible in the [UI Designer](../../../building-blocks/ui-designer/).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/icon_add_ui.gif)
For more details on how to add icons to each element, check the sections below:
## Export/import media assets
The import/export feature allows you to import or export media assets, enabling easy transfer and management of supported types of media files.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/media_library_export.png)
### Import media assets
Use this function to import media assets of various supported types. It provides a convenient way to bring in images, videos, or other media resources.
### Export all
Use this function to export all media assets stored in your application or system. The exported data will be in JSON format, allowing for easy sharing, backup, or migration of the media assets.
The exported JSON structure will resemble the following example:
```json
{
"images": [
{
"key": "cart",
"application": "flowx",
"filename": "maxresdefault.jpg",
"format": "jpeg",
"contentType": "image/jpeg",
"size": 39593,
"storagePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/cart/1681982352417_maxresdefault.jpg",
"thumbnailStoragePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/cart/1681982352417_thumbnail_maxresdefault.jpg"
},
{
"key": "pizza",
"application": "flowx",
"filename": "pizza.jpeg",
"format": "jpeg",
"contentType": "image/jpeg",
"size": 22845,
"storagePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/pizza/1681982352165_pizza.jpeg",
"thumbnailStoragePath": "https://d22tnnndi9lo60.cloudfront.net/devmain/flowx/pizza/1681982352165_thumbnail_pizza.jpeg"
}
],
"exportVersion": 1
}
```
* `images` - an array containing one object per exported image
* `exportVersion` - the version number of the export format

Each object in the `images` array holds the image-related information:

* `key` - a unique identifier or name for the image, used to identify and differentiate images within the context of the application
* `application` - the name or identifier of the application the image is associated with
* `filename` - the original filename of the image file
* `format` - the format or file extension of the image
* `contentType` - the MIME type of the image, specifying the type of data contained within the file
* `size` - the size of the image file in bytes, indicating its storage footprint on disk or in a data storage system
* `storagePath` - the URL or path to the location where the original image file is stored and can be retrieved from
* `thumbnailStoragePath` - the URL or path to the location where the thumbnail version of the image is stored and can be retrieved from
# Source systems
If different **enumeration** values are needed to communicate with other systems, source systems can be used.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/source_system.png)
On the main screen inside **Source systems**, you have the following elements:
* **Code** - not displayed in the end-user interface, but used to ensure value uniqueness
* **Name** - the name of the source system
* **Edit** - button used to edit a source system
* **Delete** - button used to delete a source system
* **New** - button used to add a new source system
### Adding new source systems
To add a new source system, follow the next steps.
1. Go to **FlowX Designer** and select the **Content Management** tab.
2. Select **Source systems** from the list.
3. Fill in the necessary details:
* Code
* Name
4. Click **Add** after you finish.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/add_source_system.png)
# Substitution tags
Substitution tags are used to generate dynamic content across the platform. Like **enumerations**, substitution tags can be defined for each language set for the solution.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/substitution_tags.png)
On the main screen inside **Substitution tags**, you have the following elements:
* **Key**
* **Values** - strings that are used in the end-user interface, according to the [language](./languages) set for the generated solution
* **Edit** - button used to edit substitution tags
* **Delete** - button used to delete substitution tags
* **New value** - button used to add a new substitution tag
* **Breadcrumbs menu**:
* **Import**
* from JSON
* from CSV
* **Export**
* to JSON
* to CSV
* **Search by** - search function used to easily look for a particular substitution tag
### Adding new substitution tags
To add a new substitution tag, follow the next steps.
1. Go to **FlowX Designer** and select the **Content Management** tab.
2. Select **Substitution tags** from the list.
3. Click **New value**.
4. Fill in the necessary details:
* Key
* Languages
5. Click **Add** after you finish.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/add_new_substitution.png)
When running a **process** that uses substitution tags or other elements with language-dependent values defined in the CMS, the values extracted by default will be those of the default language.
### Getting a substitution tag by key
```swift
public func getTag(withKey key: String) -> String?
```
All substitution tags will be retrieved by the [**SDK**](../../../../sdks/angular-renderer) before starting the first process and will be stored in memory.
Whenever the container app needs a substitution tag value for populating the UI of the custom components, it can request the substitution tag using the method above, providing the key.
For example, substitution tags can be used to localize the content inside an application.
### Example
#### Localizing the app
You must first check and configure the FLOWX.AI Angular renderer to be able to replicate this example. Click [here](../../../../sdks/angular-renderer) for more information.
The `flxLocalize` pipe is found in the `FlxLocalizationModule`.
```typescript
import { FlxLocalizationModule } from 'flowx-process-renderer';
```
```typescript
import { Component } from '@angular/core';
@Component({
selector: 'app-dummy-component',
template: `
{{ stringToLocalize | flxLocalize }}
`,
})
export class DummyComponent{
stringToLocalize: string = `@@localizedString`
}
```
Strings that need to be localized must have the '**@@**' prefix, which the **flxLocalize** pipe uses to extract and replace the string with a value found in the substitution tags enumeration.
Substitution tags are retrieved when the first start process call is made and are cached for subsequent start process calls.
# Configuration parameters
Configuration parameters allow applications to be dynamic, flexible, and environment-specific. They enable managing variables and values that change across deployment environments (e.g., Development, QA, Production), without requiring hardcoded updates. This feature is particularly valuable for managing sensitive information, environment-specific settings, and configurations for integrations.
Configuration Parameters are defined per **application version**, ensuring consistency across builds while allowing for environment-specific overrides. These parameters can be used across various components, such as business rules, UI elements, integration designer, and gateways.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-18%20at%2018.45.32.png)
***
## Default and override values
* **Default Value**: Defined during parameter creation and included in the application build. It serves as the fallback value when no environment-specific override is set.
* **Override Value**: A value defined post-deployment for a specific environment. Overrides take precedence during runtime.
For details on configuring runtime overrides, see:
***
## Types of configuration parameters
Configuration parameters are defined as **key-value pairs** and support the following types:
### Value
A static value directly used by the application. Suitable for settings that do not change across environments.
* **Use Cases**:
* Feature flags to toggle functionality (e.g., enabling/disabling insurance sales in a customer dashboard).
* Email addresses for notification recipients.
* Homebank redirect URLs for specific processes.
* **Example**:
* **Key**: `officeEmail`
* **Type**: `value`
* **Value**: `officeEmail@office.com`
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-19%20at%2010.02.09.png)
***
### Environment variable
References an external variable set by the DevOps team. These variables are defined per environment and referenced using a **name convention**.
* **Use Cases**:
* Environment-specific API base URLs.
* Dynamic configuration of services or integrations.
* **Example**:
* **Key**: `baseUrl`
* **Type**: `environment variable`
* **Value**: `BASE_URL` (name convention pointing to an externally defined value)
Configuration details:
| Key | Type | Value | Description |
| ------- | ---------------------- | ---------- | ------------------------------------------------------------ |
| baseUrl | `environment variable` | `BASE_URL` | A reference to the base URL configured externally by DevOps. |
**Example values for different environments**
| Environment | External Variable Name | Actual Value |
| ----------- | ---------------------- | ----------------------------- |
| Development | `BASE_URL` | `https://dev.example.com/api` |
| QA | `BASE_URL` | `https://qa.example.com/api` |
| Production | `BASE_URL` | `https://api.example.com` |
***
### Secret environment variable
Used for sensitive data like passwords, API keys, or credentials. These values are securely managed by DevOps and referenced using a **name convention**.
* **Use Cases**:
* Passwords or tokens for integrations.
* Secure configuration of external services in the integration designer.
* **Example**:
* **Key**: `dbPassword`
* **Type**: `secret environment variable`
* **Value**: `DB_PASSWORD`
***
## Use cases for configuration parameters
Configuration parameters simplify the management of environment-specific or dynamic settings across multiple application components:
1. **Business Rules**:
* Define dynamic logic based on parameters such as feature toggles or environment-specific conditions (see the sketch after this list).
2. **UI Elements**:
* Configure content dynamically based on the environment (e.g., URLs for redirects, conditional features).
3. **Integration Designer**:
* Reference the token parameter.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-19%20at%2010.40.02.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-19%20at%2010.41.41.png)
4. **Gateways**:
* Dynamically manage routing and decision-making logic using environment-specific parameters.
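As a minimal, hedged sketch of the business-rules use case: assuming the `officeEmail` parameter defined above is exposed to the rule through a hypothetical `config` input map (the exact accessor depends on your platform version), an MVEL-style rule could read it like this:

```java
// Illustrative only: "config" is a hypothetical map holding the application's
// configuration parameters; adapt the accessor to your platform version.
officeEmail = config.get("officeEmail");

// Route a notification to the configured recipient and expose it to the UI.
output.put("notification", {
    "recipient": officeEmail,
    "subject": "New application received"
});
```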
## Adding a configuration parameter
To add a new configuration parameter:
1. Navigate to **Your Application** → **Configuration Parameters**.
2. Click **New Parameter** and provide the following:
* **Key**: Name of the parameter.
* **Type**: Select `value`, `environment variable`, or `secret environment variable`.
* **Value**:
* For `value`: Enter the static value.
* For `environment variable` or `secret environment variable`: Enter the agreed name convention.
3. Click **Save** to include the parameter in the application build.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-18%20at%2019.07.20.png)
***
## Technical notes and best practices
### Security
* **Sensitive Values (ENV\_SECRET)**:
* Store and manage securely.
* Do not display in the frontend or expose via public APIs.
* Avoid logging sensitive information.
### Environment-specific updates
* Environment variable updates (`ENV`/`ENV_SECRET`) are managed by DevOps.
* Updates may require service restarts unless a caching or real-time mechanism (e.g., change streams) is implemented.
### Reserved keys
* Certain keys are reserved for system use (e.g., `processInstanceId`). Avoid using reserved keys for custom configurations.
***
# FlowX.AI License Engine
The License Engine is part of the core extensions of the FlowX.AI platform. It is used for displaying reports regarding the usage of the platform in the FlowX.AI Designer.
It can be quickly deployed on the chosen infrastructure and then connected to the **FlowX Engine** through **Kafka** events.
Let's go through the steps needed in order to deploy and set up the service:
# FlowX.AI Scheduler
The Scheduler is part of the core extensions of the FlowX.AI platform. It can be easily added to your custom FlowX deployment to enhance the core platform capabilities with functionality specific to scheduling messages.
The service offers the possibility to schedule a message that you only need to process after a configured time period.
It can be quickly deployed on the chosen infrastructure and then connected to the **FLOWX.AI Engine** through Kafka events.
## Using the scheduler
After deploying the scheduler service in your infrastructure, you can start using it to schedule messages that you need to process at a later time.
One such example would be to use the scheduler service to expire processes that were started but haven't been finished.
First, check that the configured Kafka topics match the ones configured in the engine deployment.
For example, the engine topics `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_SET` and `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_STOP` **must** match the ones configured in the scheduler's `KAFKA_TOPIC_SCHEDULE_IN_SET` and `KAFKA_TOPIC_SCHEDULE_IN_STOP` environment variables.
When a process is scheduled to expire, the engine sends the following message to the scheduler service (on the topic `KAFKA_TOPIC_SCHEDULE_IN_SET`):
```json
{
"applicationName": "onboarding",
"applicationId": "04f82408-ee66-4c68-8162-b693b06bba00",
"payload": {
"scheduledEventType": "EXPIRE_PROCESS",
"processInstanceUuid": "04f82408-ee66-4c68-8162-b693b06bba00"
},
"scheduledTime": 1621412209.353327,
"responseTopicName": "ai.flowx.process.expire.staging"
}
```
The scheduled time is defined as a `java.time.Instant` (serialized in the message as epoch seconds with a fractional part, as in the example above).
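For illustration, here is a minimal Java sketch of how a producer could build this message, assuming Jackson with the `JavaTimeModule` on the classpath (with Jackson's default `WRITE_DATES_AS_TIMESTAMPS` behavior, the `Instant` serializes as epoch seconds with a fractional part, matching the sample above):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;

public class ScheduleMessageExample {
    public static void main(String[] args) throws Exception {
        // Expire the process instance 30 days from now.
        Instant scheduledTime = Instant.now().plus(Duration.ofDays(30));

        Map<String, Object> message = Map.of(
                "applicationName", "onboarding",
                "applicationId", "04f82408-ee66-4c68-8162-b693b06bba00",
                "payload", Map.of(
                        "scheduledEventType", "EXPIRE_PROCESS",
                        "processInstanceUuid", "04f82408-ee66-4c68-8162-b693b06bba00"),
                "scheduledTime", scheduledTime,
                "responseTopicName", "ai.flowx.process.expire.staging");

        // The Instant is written as epoch seconds with a nanosecond fraction,
        // e.g. 1621412209.353327000.
        ObjectMapper mapper = new ObjectMapper().registerModule(new JavaTimeModule());
        System.out.println(mapper.writeValueAsString(message));
        // Send the resulting JSON to the topic configured at KAFKA_TOPIC_SCHEDULE_IN_SET.
    }
}
```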
At the scheduled time, the payload will be sent back to the response topic defined in the message, like so:
```json
{
"scheduledEventType": "EXPIRE_PROCESS",
"processInstanceUuid": "04f82408-ee66-4c68-8162-b693b06bba00"
}
```
If you don't need the scheduled message anymore, you can discard it by sending the following message (on the topic `KAFKA_TOPIC_SCHEDULE_IN_STOP`)
```json
{
"applicationName": "onboarding",
"applicationId": "04f82408-ee66-4c68-8162-b693b06bba00"
}
```
The `applicationName` and `applicationId` fields are used to uniquely identify a scheduled message.
See the setup guide for the steps needed to deploy and set up the service.
# FlowX.AI Data Search
The Data Search service is a microservice that enables data searches within other processes. It facilitates the creation of processes capable of conducting searches and retrieving data by utilizing Kafka actions in tandem with Elasticsearch mechanisms.
Data Search service leverages Elasticsearch to execute searches based on indexed keys, using existing mechanisms.
## Using the Data Search service
### Use case
* Search for data within other processes
* Display results indicating where the search key was found in other processes
For our example, two process definitions are necessary:
* one process used to search data in another process - in our example ***"search\_process\_CDN"***
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/search_in_another_proc_34.png)
* one process where we look for data - in our example ***"add\_new\_clients"***
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/search_populate_data.png)
## Add data process example
Firstly, create a process where data will be added. Subsequently, the second process will be used to search for data in this initial process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/addDataProc.png)
In the "Add Data Process Example", note that we add mock data to simulate the data that would exist within real processes.
Example of MVEL Business Rule:
```java
output.put ("application", {
"date": "22.08.2022",
"client": {
"identificationData": {
"firstName": "John",
"lastName": "Doe",
"cityOfBirth": "Anytown",
"primaryDocument": {
"number": 123456,
"series": "AB",
"issuedCountry": "USA",
"issuedBy": "Local Authority",
"issuedAt": "01.01.2010",
"type": "ID",
"expiresAt": "01.01.2030"
},
"countryOfBirth": "USA",
"personalIdentificationNumber": "1234567890",
"countyOfBirth": "Any County",
"isResident": true,
"residenceAddress": {
"country": "USA",
"city": "Anytown",
"street": "Main Street",
"streetNumber": 123
},
"mailingAddress": {
"country": "USA",
"city": "Anytown",
"street": "Main Street",
"streetNumber": 123
},
"pseudonym": null
}
}
}
);
```
Now we can play with this process and create some process instances with different states.
## Search process example
Configure the "Search process" to search data in the first created process instances:
Create a process using the [**Process Designer**](../../building-blocks/process/process).
Add a [**Task node**](../../building-blocks/node/task-node) within the process. Configure this node and add a business rule if you want to customize the display of results, e.g:
```java
output.put("searchResult", {"result": []});
output.put("resultsNumber", 0);
```
For displaying results in the UI, you can also consider utilizing [**Collections**](../../building-blocks/ui-designer/ui-component-types/collection/) UI element.
Add a **user task** and configure a send event using a [**Kafka send action**](../../building-blocks/node/message-send-received-task-node#send-message-task). Configure the following parameters:
The Kafka topic for the search service requests (defined by the `KAFKA_TOPIC_DATA_SEARCH_IN` environment variable in your deployment).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/search_in_topic.png)
```json
{
"searchKey": "application.client.identificationData.lastName",
"value": "12344",
"processStartDateAfter": "YYY-MM-DD:THH:MM:SS", //optional, standard ISO 8601 date format
"processStartDateBefore": "YYY-MM-DD:THH:MM:SS", //optional, standard ISO 8601 date format
"processDefinitionNames": [ "processDef1", "processDef2"],
"states": ["STARTED", "FINISHED, ONHOLD"] //optional, depending if you want to filter process instances based on their status, if the parameter is ommited, the process will display all the statuses
}
```
* **searchKey** - represents the process key used to search data stored in a process
Indexing this key within the process is crucial for the Data Search service to effectively locate it. To enable indexing, navigate to your desired process definition and access **Process Settings → Task Management → Search indexing**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/indexed_key.png)
❗️ Keys are indexed automatically when the process status changes (e.g., created, started, finished, failed, terminated, expired), when swimlanes are altered, or when stages are modified. To ensure immediate indexing, select the 'Update in Task Management' option either in the **node configuration** or within **Process Settings → General** tab.
* **value** - the dynamic process key added to our input element, which stores the data entered by the user and sends it from the front end
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/searchValue.png)
* **states** - `["STARTED", "FINISHED", "ONHOLD", ...]` - optional, filters process instances by status; if the parameter is omitted, all statuses are returned
Check the [Process Status Data](../../building-blocks/process/process-instance) section for more examples of possible states.
* **Data to send (key)**: Used for validating data sent from the frontend via an action (refer to **User Task** configuration section)
* **Headers**: Mandatory - `{"processInstanceId": ${processInstanceId}}`
If you also use callback actions, you will need to add the following headers as well:
`{"destinationId": "search_node", "callbacksForAction": "search_for_client"}`
Example (dummy values extracted from a process):
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/body_message_search_service.png)
A custom microservice (a core extension) will receive this event and search for the value in Elasticsearch.
It will respond to the engine via a Kafka topic (defined at `KAFKA_TOPIC_DATA_SEARCH_OUT` env variable in your deployment). Add the topic in the **Node config** of the User task where you previously added the Kafka Send Action.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/search_result_topic.png)
### Response
The response body message will look like this:
#### If there is no result:
```json
{
"result": [],
"searchKey": "application.client.name.identificationData.lastName",
"tooManyResults": "false",
"searchValue": "random"
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/noResults.png)
To access the view of your process variables, tokens, and subprocesses, go to **FLOWX.AI Designer > Active process > Process Instances**. Here you will find the response.
#### If there is a list of results:
```json
{
  "searchKey": "application.client.identificationData.personalIdentificationNumber",
  "result": [{
    "processInstanceUUID": "UUID",
    "status": "FINISHED",
    "processStartDate": "<date>",
    "data": { "...": "all data indexed in Elasticsearch for that process" }
  }],
  "tooManyResults": true // or false
}
```
**NOTE**: Up to 50 results will be received if `tooManyResults` is true.
Example with dummy values extracted from a process:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/search_data_response.png)
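As a hedged sketch (not a canonical platform pattern): once the response body has been mapped onto the `searchResult` process key initialized earlier, a business rule on the receiving node could derive the result count for the UI like this:

```java
// Illustrative only: assumes the Kafka response body was written to the
// "searchResult" process key configured earlier in this example.
results = searchResult.get("result");
output.put("resultsNumber", results == null ? 0 : results.size());
output.put("searchResult", searchResult);
```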
#### Developer
Enabling Elasticsearch indexing **requires** activating the configuration in the **FlowX Engine**. Check the [**indexing section**](../../../setup-guides/flowx-engine-setup-guide/configuring-elasticsearch-indexing/) for more details.
For deployment and service setup instructions, see the setup guide.
# Task management
Task Management in FlowX.AI is a core functionality that allows users to create, configure, and manage tasks within the platform, providing a structured way to handle work processes. It enables the definition of tasks based on business processes, offers tools for allocating, tracking, and managing tasks across various roles and departments, and supports customization through views, filters, and rules.
![Task Manager](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/tsk_mng2.0.png)
Task Management also includes capabilities for user roles, customizable tables, and integration with the application through both low-code and full-code approaches, ensuring flexibility for various use cases.
## Key features
* **Views**: Configurable interfaces to display task-related data based on process definitions.
* **Hooks**: Trigger a specified process when a token enters or exits a process, swimlane, or stage, based on the configured settings.
* **Stages**: Used to monitor the progression of tasks within a process by identifying where work may have stalled. Stages can map to teams as well as to business phases such as onboarding, verification, and validation.
* **Allocation Rules**: Facilitate the equal distribution of tasks among users with permissions in a specific swimlane.
## Task Management Views
Views offer a flexible way to tailor task data display according to business needs. By configuring views, users can create structured, customized interfaces that help specific roles, departments, or use cases access relevant data effectively.
Example of custom view:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-12%20at%2016.29.13.png)
### All tasks
In the All Tasks section, you can view all tasks generated from any process definition marked as "to be used in task manager" within an application. This provides a centralized view of all relevant tasks, as long as you have the appropriate permissions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-11%20at%2018.32.13.png)
It provides an interactive visualization, with built-in filtering functionalities for refined data display.
Dataset Config and Table Config options are not accessible in the All Tasks view.
### Creating a view
In the Views section, you can create a new view by selecting a name and choosing a process definition. The process definition is crucial because it determines which keys you can access when configuring the view.
To set up a view, navigate to the Views section in the Task Management interface:
1. Click **Add View**.
2. Enter a **Name** for the view.
3. **Choose a Process Definition**: This will link the view's configuration to a specific process and its associated keys.
Once a view is created for a process (e.g., Process X), it cannot be reassigned to another process. This ensures consistent data structuring based on the selected process definition.
Upon creating a view, you are automatically redirected to configure its parameters.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/2024-11-11%2018.34.39.gif)
The Task Management default view contains four primary columns, with two additional default columns that can be added.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-11%20at%2018.37.30.png)
You also have the option to add custom parameters to the table, with no limit on the number of columns.
**Columns explained**:
* **Stage**: Indicates where the process halted, providing a clear view of its current state.
* **Assignee**: Displays the individual to whom the task is assigned.
* **Priority**: Reflects the urgency level of the task.
* **Last Updated**: Shows the timestamp of the most recent action taken on the task.
* **Title**: Displays the designated name of the task, which can be customized by using Business Rules.
* **Status**: Represents the current state of a Token, such as 'Started' or 'Finished.'
* **Custom parameters**: User-defined keys within the process settings, which become available only after their configuration is complete.
### Custom parameters and display names
### Display names
You can rename default and custom parameters to make them contextually relevant for various business needs. For example:
* Rename an address field to clarify if it refers to "Residence" or "Issuing Location."
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/2024-11-11%2019.13.07.gif)
* Use Substitution Tags for dynamic display names.
Renaming a parameter’s Display Name will only change how it’s shown in the interface, without altering the actual data model. The rename option is also available for default parameters (not just custom parameters). Changing the Display Name also allows the use of Substitution Tags.
### Custom Parameters in Task Management
Custom parameters in Task Management provide a way to tailor task displays and ensure that task data aligns with specific business needs and contexts.
**Key setup and configuration**
1. **Adding Custom Parameters**:
* In **Process Settings** → **Task Management**, you can define the custom keys that will be indexed and made available for tasks.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-11%20at%2019.11.02.png)
* Each custom parameter can be renamed to suit different business contexts, ensuring clarity in cases where parameters may have multiple meanings.
Ensure that the custom key exists in the **Data Model** before it can be mapped in Task Management.
If the **attribute type** of a custom key is modified after it has been indexed, the key must be re-indexed in the **Task Management** section. This re-indexing step is crucial to ensure that Task Management reflects the updated attribute type correctly.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-12%20at%2010.38.23.png)
For data from custom parameters to flow correctly, ensure that **Forms to Validate** is set on the **UI Action button** in your UI for the corresponding process. This configuration is necessary for custom parameters to be validated and included in Task Management.
2. **Labeling Custom Parameters**:
* When adding a custom parameter, use the rename option to assign it a label relevant to the process context (as demonstrated in the example [**above**](#display-names)).
* This allows parameters to remain flexible for different roles or departments, adapting to each use case seamlessly.
3. **Enabling Task Management Integration**:
* To ensure that data flows correctly into Task Management, enable the **Use process in task management** toggle in **Process Settings** within your **application** in **FlowX Designer**.
* Some actions may be restricted based on user roles and access rights, so confirm that your role allows necessary access.
4. **Configuring Node-Specific Updates**:
* To enable Task Manager to send targeted updates from specific parts of a process:
* In **FlowX Designer**, open the relevant **application** and then the desired **process definition** and click **Edit**.
* Select the node(s) where updates should be triggered and enable the **Update task management?** switch.
* You can configure this action for multiple nodes, allowing flexibility in tracking and updating based on process flow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-12%20at%2013.01.54.png)
Activating the **Use process in task management** flag at both the process settings level and node level is essential for ensuring data consistency and visibility in Task Management.
### Table config and Dataset config in Task Management
You can use **Table Config** and **Dataset Config** to configure and filter task data effectively. These configurations help create a customized and user-friendly interface for different roles, departments, or organizational needs.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-12%20at%2013.41.17.png)
#### Table config
**Table Config** is used to define the structure and content of the Task Management table view. Here, you can configure the columns displayed in the table and set up default sorting options.
1. **Configuring the Table Columns**:
* **Default Columns**: By default, the table includes the following columns: **Stage**, **Assignee**, **Priority**, and **Last Updated**.
* You can add additional columns, such as **Title**, **Status**, and **Custom Parameters**. Custom parameters can be chosen from the keys configured in **Process Settings** → **Task Management**.
2. **Setting Default Sorting**:
* You can select one column for **default sorting** in ascending or descending order. This configuration helps prioritize how data is initially displayed, often based on “Last Updated” or other relevant fields.
* If no specific sorting rule is configured, the table will automatically apply sorting based on the **Last Updated** column.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-12%20at%2013.54.57.png)
#### Dataset config
**Dataset Config** is used to filter and refine the data displayed in Task Management views. This helps create targeted views based on specific needs, such as differentiating data for front office vs. back office or specific roles like managers and operators.
1. **Adding Filters**:
* You can apply filters on the keys brought into the **Dataset Config** to customize the data shown. Filters can be applied based on various data types, such as enums, strings, numbers, dates, and booleans.
2. **Filtering Options by Data Type**:
* **Enums**: Can be filtered using the `In` operator. Only parent enums are available for mapping in Task Management (ensure enums are mapped in the data model beforehand).
Before you can map enums in Task Management, they must be configured in the Data Model. Only parent enums can be mapped.
If any changes are made to the Data Model after the keys have been configured and indexed in Task Management, these changes will not be automatically reflected. You must re-add and re-index the keys in the process settings to ensure that the updated information is indexed correctly.
* **Strings**: Available filters include `Not equal`, `In`, `Starts with`, `Ends with`, `Contains`, and `Not contains`.
* **Numbers**: Filters include `Equals`, `Not equal`, `Greater than`, `Less than`, `Greater than or equal`, and `Less than or equal`.
* **Dates and Currencies**: Filters include `Equals`, `Not equal`, `Greater than`, `Less than`, `Greater than or equal`, `Less than or equal`, and `In range`.
* **Booleans**: Can be filtered using the `Equals` operator.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-12%20at%2017.07.35.png)
3. **Role-Specific Configurations**:
* **Dataset Config** allows creating views tailored to specific audiences, such as different departments or roles within an organization. However, note that filters in Dataset Config do not override user permissions on task visibility.
Example of a filter applied to a number attribute:
### Managing data model changes
While creating a view, you may want to modify the **data model**. It's important to note that changes to the **data model** do not directly impact the views. Views are tied to the process definition, not the data model. Therefore, if you make changes to the data model, you do not need to create a new view unless the changes also impact the underlying process.
### Task details
The **Task Details** tab within **Task Manager** provides key process information, including:
* **Priority**: Enables task prioritization.
* **Status**: The current process status (in our example, `STARTED`)
* **Stage**: Specific stages during process execution.
* **Comments**: User comments.
* **History**: Information such as task creation, creator, and status changes.
* **Last Updated**: Displays the most recent timestamp of any changes made to a task.
* **View Application**: Provides direct access to the application URL where the FlowX.AI process related to a specific task is running.
![Task details](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-12%20at%2017.46.01.png)
**Accessing Task details in Task Management**
To access the **Task Details** of a specific task in a Task Management view, follow these steps:
1. **Navigate to the Task Management Interface**:
* Open the Task Management section within the application and select the desired **View** that contains the list of tasks.
2. **Locate the Task**:
* In the selected view (e.g., All Tasks, Custom View), find the task you want to inspect. Use filters and sorting options if necessary to locate the task more efficiently.
3. **Open Task Details**:
* Click on the task or select the **Details** option (often represented by an icon or “Details” link) associated with the task entry.
* This action will open the **Task Details** panel, which provides an in-depth view of information specific to that task.
Please note that specific roles must be defined in a process to utilize all the task management features. For configuration details, see [**Configuring Access Roles for Task Manager**](../../../../setup-guides/plugins-access-rights/configuring-access-rights-for-task-management).
## Process status updates
Task Manager displays various statuses based on process state:
| Status | Definition |
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Created** | This status is visible only if there is an issue with the process creation. If the process is error-free in its configuration, you will see the **Started** status instead. |
| **Started** | Indicates that the process is in progress and running. |
| **Finished** | The process has reached an end node and completed its execution. |
| **Failed** | Displayed when a [CronJob](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) is enabled in the [FlowX Engine](../../../core-components/flowx-engine). For example, if a CronJob is triggered but not completed on time, tasks move to the `FAILED` status. |
| **Expired**    | Displayed when the `expiryTime` field is defined within the process definition. To set up an expiry time, go to **FLOWX Designer > Processes > Definitions**, select a process, click the "⋮" button and choose **Settings**, then edit the **Expiry time** field in the **General** tab. |
| **Aborted** | This status is available for processes that also contain subprocesses. When a subprocess is running (and the [token is moved backward](../../../../flowx-designer/managing-a-process-flow/moving-a-token-backwards-in-a-process) to redo a series of previous actions) - the subprocess will be aborted. |
| **Dismissed** | Available for processes that contain subprocesses. It is displayed when a user stops a subprocess. |
| **On hold** | Freezes the process, blocking further actions. A superuser can trigger this status for clarification or unfreeze. |
| **Terminated** | A request is sent via Kafka to terminate a process instance, ending all active tokens in the current process or subprocesses. |
## Swimlanes and Stages updates
Task Manager also tracks swimlane and stage changes:
### Swimlanes updates
| Status | Definition |
| ------------------ | ------------------------------------ |
| **Swimlane Enter** | Marks token entering a new swimlane. |
| **Swimlane Exit** | Indicates token exiting a swimlane. |
### Stages updates
| Status | Definition |
| --------------- | --------------------------------- |
| **Stage Enter** | Marks token entering a new stage. |
| **Stage Exit** | Indicates token exiting a stage. |
## Using the plugin
The Task Manager plugin offers a range of features tailored to different roles, including:
* [Swimlane permissions for Task Management](#swimlane-permissions-for-task-management)
* [Assigning and unassigning tasks](#task-assignment-and-reassignment)
* [Hold/unhold tasks](#holdunhold-tasks)
* [Adding comments](#adding-comments)
* [Viewing the application](#viewing-the-application)
* [Bulk updates (via Kafka)](#bulk-updates)
### Swimlane permissions for Task Management
To perform specific actions within Task Management at process level, you must configure swimlane permissions at the process settings level. Each swimlane (e.g., BackOffice, FrontOffice, Manager) should be assigned appropriate roles and permissions based on the responsibilities and access needs of each user group.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-14%20at%2012.44.21.png)
**Example permissions configuration**
Below are example configurations for swimlane permissions based on roles commonly used in a loan application approval process:
1. **BackOffice Swimlane**:
* Role: `FLOWX_BACKOFFICE`
* Permissions:
* **Unhold**: Allows the user to resume tasks that have been put on hold.
* **Execute**: Enables the user to perform task actions.
* **Self Assign**: Permits users to assign tasks to themselves.
* **View**: Grants viewing rights for tasks.
* **Hold**: Allows tasks to be temporarily paused.
2. **Manager (Supervisor) Swimlane**:
* Role: `FLOWX_MANAGER`
* Permissions:
* **Unhold**: Allows the user to resume tasks that have been put on hold.
* **Execute**: Enables the user to perform task actions.
* **Self Assign**: Permits users to assign tasks to themselves.
* **View**: Grants viewing rights for tasks.
* **Hold**: Allows tasks to be temporarily paused.
These permissions can be customized depending on each use case and organizational needs. Ensure that permissions are aligned with the roles' responsibilities within the workflow.
### Task assignment and reassignment
Consider this scenario: you're the HR manager overseeing the onboarding process for new employees. In order to streamline this operation, you've opted to leverage a task manager plugin. This process consists of two key phases: the Initiation Stage and the Account Setup Stage, each requiring a designated team member.
The Initiation Stage has successfully concluded, marking the transition to the Account Setup Stage. At this juncture, it's essential to reassign the task, originally assigned to John Doe, to Jane Doe, a valuable member of the backoffice team.
### Hold/unhold tasks
As a project manager overseeing various ongoing projects, you may need to temporarily pause one due to unforeseen circumstances. To manage this, you use the "On Hold" status.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/2024-11-13%2013.48.35.gif)
### Adding comments
When handling on-hold projects, document the reasons, inform the team, and plan for resumption. This pause helps address issues and ensures a smoother project flow upon resuming. Never forget to add comments:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-13%20at%2013.53.07.png)
### Viewing the application
This property points to the application URL where the FlowX process is loaded.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-13%20at%2017.06.08.png)
#### Process settings level
In the Process Definition settings, navigate to the **Task Management** tab and locate the **Application URL** field. Here, paste the application URL where the process is loaded, following this format:
```
{baseURL}/appId/buildId/processes/resourceId/instance
```
Example:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-13%20at%2017.45.08.png)
You can also use a predefined generic parameter as the URL: `${genericParameter}`.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-13%20at%2017.07.36.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-13%20at%2017.08.12.png)
#### Process data level
If `task.baseUrl` is specified in the process parameters, it will be sent to the Task Manager to update the tasks accordingly.
```java
output.put("task", {"baseUrl": "https://your_base_url"});
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/tsk_manager_base_url.png)
The `baseURL` set in the process data (business rules) will override the `baseURL` set in the process definition settings.
## Bulk updates
Send bulk update requests via Kafka (through the Process Engine) to perform multiple operations at once. Use the Kafka topic:
* `KAFKA_TOPIC_PROCESS_OPERATIONS_BULK_IN` (defined in the Process Engine) to send the operations accepted on `KAFKA_TOPIC_PROCESS_OPERATIONS_IN` as an array, allowing multiple operations at once. More details [**here**](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup#topics-related-to-the-task-management-plugin).
Example of a bulk request:
```json
{
"operations": [
{
"operationType": "HOLD",
"taskId": "some task id",
"processInstanceUuid": "d3aabfd8-d041-4c62-892f-22d17923b223", // the id of the process instance
"swimlaneName": "Default", //name of the swimlane
"owner": null,
"author": "john.doe@flowx.ai",
"requestID": "1234567891"
},
{
"operationType": "HOLD",
"taskId": "some task id",
"processInstanceUuid": "d3aabfd8-d041-4c62-892f-22d17923b223",
"swimlaneName": "Default", //name of the swimlane
"owner": null,
"author": "jonh.doe@flowx.ai",
"requestID": "1234567890"
}
]
}
```
For more information on bulk updates configuration, see FlowX Engine setup:
### Full-Code implementation
For more customized UX, the **full-code** implementation using the **Task Management SDKs (React and Angular)** allows developers to build custom tables, cards, or any other UI elements based on the views and columns configured in Task Management.
## FAQs
**Q: Does changing the display format of a parameter affect how it is indexed?**
A: No. The format changes will only affect how the data is displayed, not how it's indexed.

**Q: Can views be customized beyond the low-code configuration?**
A: Yes, you can always switch to full-code and create custom views or tables using the Task Management SDK.

**Q: Can data from a subprocess be displayed in Task Management?**
A: To use data from a subprocess, you must send it to the parent process first. Subprocess keys are currently displayed in the task manager once indexed through the parent process.
# Using allocation rules
Allocation rules define when tasks should be auto-assigned to users once they reach a swimlane that has a specific role configured (for example, specific tasks will be assigned to the _front office_ and others to the _back office_ only).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-14%20at%2012.54.21.png)
Tasks are always allocated based on each user's load (the number of tasks assigned from current and other processes). If two or more users have the same number of assigned tasks, the task is randomly assigned to one of them.
## Accessing allocation rules
To access the allocation rules, follow the next steps:
1. Open **FlowX Designer**.
2. Go to your **Application** and from the side menu, under **Task Management**, select the **Allocation rules** entry.
## Adding process and allocation rules
To add process and allocation rules, follow the next steps:
1. Click **Add process** button, in the top-right corner.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/2024-11-14%2013.01.30.gif)
2. Select a [**process definition**](../../../../building-blocks/process/process-definition) from the drop-down list.
3. Click **Add swimlane allocations button (+)** to add allocations.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-14%20at%2013.03.34.png)
If there are no users with execute rights in the swimlane you want to add (`hasExecute: false`), the following error message will be displayed:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-14%20at%2013.06.00.png)
4. **Option 1**: Allocate all users with `execute rights`.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/allocate_execute_rights.png)
5. **Option 2**: Allocate only users you choose from the drop-down list. You can use the search function to filter users by name.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/allocate_execute_rights1.png)
6. Click **Save**.
Users with out-of-office status will be skipped by automatic allocation. More information about the out-of-office feature is available [here](using-out-of-office-records).
## Editing allocation rules
To edit allocation rules, follow the next steps:
1. Click **Edit** button.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-14%20at%2013.14.12.png)
2. Change the allocation method.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/change_allocation_method.gif)
3. Click **Save**.
### Viewing allocation rules
The allocation rules list displays all the configured swimlanes grouped by process:
1. **Process** - the process definition name where the swimlanes were configured
2. **Swimlane** - the name of the swimlane
3. **Allocation** - applied allocation rules
4. **Edited at** - the last time when an allocation was edited
5. **Edited by** - the user who edited/created the allocation rules
## Exporting/importing process allocation rules
To copy process allocation rules and move them between different environments, you can use the export/import feature.
You can export process allocation rules as JSON files directly from the allocation rules list:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-14%20at%2013.17.51.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-11-14%20at%2013.19.39.png)
# Using hooks
Hooks let you automatically trigger a process when specific events occur, without manual intervention.
Users with task management permissions can create hooks to trigger specific **process instances**, such as sending notifications when **events** occur. Follow the instructions below to set up roles for hooks scope usage:
![Hooks](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/hooks.png)
Hooks can be linked to different events and define what will happen when they are triggered.
## Creating a hook
To create a new hook, follow the next steps:
1. Open **FLOWX.AI Designer**.
2. Go to Task Manager and select **Hooks**.
3. Click **New Hook** (you can also import or export a hook).
4. Fill in the required details.
![Create a new hook](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/creating_a_hook.png)
## Types of hooks
There are three types of hooks you can create in Task Manager:
* process hooks
* swimlane hooks
* stage hooks
Swimlane and stage hooks can be configured with an SLA (time when a triggered process is activated).
![SLA hooks](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/hook_types.png)
Dismiss SLA is available only for hooks configured with SLA.
[Here](../../../../building-blocks/node/timer-events/timer-expressions) you can find more information about the SLA - duration formatting.
# Using out of office records
The out-of-office feature allows you to register users' availability to perform tasks, whether tasks are allocated manually or automatically.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/out_of_office_records.png)
Users with out-of-office status are excluded from the list of candidates for automatic task allocation during the out-of-office period. More information about allocation rules is available [here](./using-allocation-rules).
## Accessing out-of-office records
To add out-of-office records, follow the next steps:
1. Open **FlowX Designer**.
2. From the side menu, under **Task Management**, select the **Out of office** entry.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/access_out_of_office.png)
## Adding out-of-office records
To add out-of-office records, follow the next steps:
1. Click **Add out-of-office** button, in the top-right corner.
2. Fill in the following mandatory details:
* Assignee - user single select
* Start Date (:exclamation:cannot be earlier than tomorrow)
* End Date (:exclamation:cannot be earlier than tomorrow)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/add_out_of_office.png)
3. Click **Save**.
## Editing out-of-office records
To edit out-of-office records, follow the next steps:
1. Click **Edit** button.
2. Modify the dates (:exclamation:cannot be earlier than tomorrow).
3. Click **Save**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/edit_out_of_office.png)
## Deleting out-of-office records
To delete out-of-office records, follow the next steps:
1. From the **out-of-office list**, select a **record**.
2. Click **Delete** button. A pop-up message will be displayed: *"By deleting this out-of-office record, the user will become eligible to receive tasks in the selected period. Do you want to proceed?"*
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/delete_out_of_office.png)
If you choose to delete an out-of-office record, the user becomes eligible to receive task allocations during the mentioned period. More information about automatic task allocation is available [here](./using-allocation-rules).
3. Click **Yes, proceed** if you want to delete the record, click **Cancel** if you want to abort the deletion.
If the out-of-office period includes days in the past, the record cannot be deleted; the following message is displayed: *"You can't delete this record because it already affected allocations in the past. Try to shorten the period, if it didn't end."*
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/cant_delete_ooo.png)
## Viewing out-of-office records
The out-of-office records list contains the following elements:
1. **User** - firstName, lastName, userName
2. **Start Date** - the date when the out-of-office period will be effective
3. **End Date** - the date when the out-of-office period will end
4. **Edited at** - the last time when an out-of-office record was edited
5. **Edited by** - the user who edited/created the out-of-office record
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/view_ooo.png)
The list is sorted in reverse chronological order by “edited at” `dateTime` (newest added on top).
# Using stages
You can define specific stages during the execution of a process. Stages are configured on each node and they will be used to trigger an event when passing from one stage to another.
## Creating a new stage
To create a new stage, follow the next steps:
1. Open **FlowX Designer**.
2. Go to Task Manager and select **Stages**.
3. Click **New Stage.**
4. Fill in the required details.
![Create new stage](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/stages_add_new.png)
## Assigning a node to a stage
To assign a node to a stage, follow the next steps:
1. Open **FlowX Designer** and then select your **process**.
2. Choose the node you want to assign and select the **Node Config** tab.
3. Scroll down until you find the **Stage** field and click the dropdown button.
4. Choose the stage you want to assign.
![Node Assigning](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/stages_node_assigning.png)
# Connectors
Connectors are the vital gateway to enhancing FlowX.AI's capabilities. They seamlessly integrate external systems, introducing new functionalities by operating as independently deployable, self-contained microservices.
## Connector essentials
At its core, a connector acts as an anti-corruption layer. It manages interactions with external systems and crucial data transformations for integrations.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/connector_structure.png)
## Key Functions
Connectors act as lightweight business logic layers, performing essential tasks:
1. **Data Transformation**: Ensure compatibility between different data formats, like date formats, value lists, and units.
2. **Information Enrichment:** Add non-critical integration information like flags and tracing GUIDs.
## Creating a connector
1. **Create a Kafka Consumer:** Follow [**this guide**](./creating-a-kafka-consumer) to configure a Kafka consumer for your Connector.
2. **Create a Kafka Producer:** Refer to [**this guide**](./creating-a-kafka-producer) for instructions on setting up a Kafka producer.
Adaptable Kafka settings can yield advantageous event-driven communication patterns. Fine-tuning partition counts and consumers based on load testing is crucial for optimal performance.
### Design considerations
Efficient Connector design within an event-driven architecture demands:
* Load balancing solutions for varying communication types between the Connector and legacy systems.
* Custom implementations for request load balancing, Connector scaling, and more.
Incorporate all received Kafka headers in responses to ensure seamless communication with the FlowX Engine.
### Connector configuration sample
Here's a basic setup example for a connector:
* Configurations and examples for Kafka listeners and message senders.
* **OPTIONAL**: Activation examples for custom health checks.
[Sample available here](https://github.com/flowx-ai/quickstart-connector/tree/feature/easy-start)
Follow these steps and check the provided code snippets to effectively implement your custom FLOWX connector:
1. **Name Your Connector**: Choose a meaningful name for your connector service in the configuration file (`quickstart-connector/src/main/resources/config/application.yml`):
```yaml
spring:
application:
name: easy-connector-name # TODO 1. Choose a meaningful name for your connector service.
jackson:
serialization:
write_dates_as_timestamps: false
fail-on-empty-beans: false
```
2. **Select Listening Topic:** Decide the primary topic for your connector to listen on (you can do this at the following path → `quickstart-connector/src/main/resources/config/application-kafka.yml`):
If the connector needs to listen to multiple topics, ensure you add settings and configure a separate thread pool executor for each needed topic (refer to `KafkaConfiguration`, you can find it at `quickstart-connector/src/main/java/ai/flowx/quickstart/connector/config/KafkaConfiguration.java`).
3. **Define Reply Topic**: Determine the reply topic, aligning with the Engine's topic pattern.
4. **Adjust Consumer Threads**: Modify consumer thread counts to match partition numbers.
```yaml
kafka:
consumer.threads: 3 # TODO 4. Adjust number of consumer threads. Make sure number of instances * number of threads = number of partitions per topic.
auth-exception-retry-interval: 10
topic:
in: ai.flowx.easy-connector.in # TODO 2. Decide what topic should the connector listen on.
out: ai.flowx.easy-connector.out # TODO 3. Decide what topic should the connector reply on (this topic name must match the topic pattern the Engine listens on).
```
5. **Define Incoming Data Format (DTO)**: Specify the structure for incoming and outgoing data using DTOs. This can be found at the path: `quickstart-connector/src/main/java/ai/flowx/quickstart/connector/dto/KafkaRequestMessageDTO.java`.
```java
//Example for incoming DTO Format
package ai.flowx.quickstart.connector.dto;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
@Getter
@Setter
@ToString
public class KafkaRequestMessageDTO { // TODO 5. Define incoming DTO format.
private String Id;
}
```
6. **Define Outgoing Data Format (DTO)**: Specify the structure for outgoing data at the following path → `quickstart-connector/src/main/java/ai/flowx/quickstart/connector/dto/KafkaResponseMessageDTO.java`.
```java
// Example for Outgoing DTO Format
package ai.flowx.quickstart.connector.dto;
import lombok.Builder;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;
@Getter
@Setter
@ToString
@Builder
public class KafkaResponseMessageDTO implements BaseApiResponseDTO { // TODO 6. Define outgoing DTO format.
private String name;
private String errorMessage;
}
```
7. **Implement Business Logic**: Develop the logic for handling messages from the Engine and generating replies. Be sure to include the process instance UUID as the Kafka message key (see the sketch below).
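A minimal sketch of such a handler, assuming the DTOs above and the `kafka.topic.*` properties from the earlier configuration (the class name and wiring are illustrative, not part of the quickstart repository):
```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.RequiredArgsConstructor;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
@RequiredArgsConstructor
public class KafkaRequestHandler { // illustrative name, not part of the quickstart repository

    private final ObjectMapper objectMapper;
    private final KafkaTemplate<String, Object> kafkaTemplate;

    @Value("${kafka.topic.out}")
    private String outTopic;

    @KafkaListener(topics = "${kafka.topic.in}")
    public void handleRequest(ConsumerRecord<String, String> record) throws JsonProcessingException {
        KafkaRequestMessageDTO request = objectMapper.readValue(record.value(), KafkaRequestMessageDTO.class);

        // ... call the legacy system here and build the reply from its response ...
        KafkaResponseMessageDTO response = KafkaResponseMessageDTO.builder()
                .name("result for " + request.getId())
                .build();

        // the incoming message key carries the process instance UUID; reuse it as the reply key
        ProducerRecord<String, Object> reply = new ProducerRecord<>(outTopic, record.key(), response);
        // forward all received headers so the Engine can correlate the reply
        record.headers().forEach(header -> reply.headers().add(header));
        kafkaTemplate.send(reply);
    }
}
```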
Optional Configuration Steps:
* **Health Checks:** Enable health checks for all utilized services in your setup.
```yaml
management: # TODO optional: enable health check for all the services you use in case you add any
health:
kafka.enabled: false
```
Upon completion, your configuration files (`application.yml` and `application-kafka.yml`) should resemble the provided samples; adjust settings according to your requirements:
```yaml
logging:
level:
ROOT: INFO
ai.flowx.quickstart.connector: INFO
io.netty: INFO
reactor.netty: INFO
jdk.event.security: INFO
server:
port: 8080
spring:
application:
name: easy-connector-name
jackson:
serialization:
write_dates_as_timestamps: false
fail-on-empty-beans: false
management:
health:
kafka.enabled: false
spring.config.import: application-kafka.yml
logging.level.ROOT: DEBUG
logging.level.ai.flowx.quickstart.connector: DEBUG
```
And your Kafka configuration file (`application-kafka.yml`) should look like this:
```yaml
spring:
kafka:
bootstrap-servers: localhost:9092
security.protocol: "PLAINTEXT"
producer:
key-serializer: org.apache.kafka.common.serialization.StringSerializer
value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
properties:
interceptor:
classes: io.opentracing.contrib.kafka.TracingProducerInterceptor
message:
max:
bytes: ${KAFKA_MESSAGE_MAX_BYTES:52428800} #50MB
max:
request:
size: ${KAFKA_MESSAGE_MAX_BYTES:52428800} #50MB
consumer:
group-id: kafka-connector-group
key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
properties:
interceptor:
classes: io.opentracing.contrib.kafka.TracingConsumerInterceptor
kafka:
consumer.threads: 3
auth-exception-retry-interval: 10
topic:
in: ai.flowx.easy-connector.in
out: ai.flowx.easy-connector.out
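# The overrides below target a secured cluster (SASL/OAuth); they are typically enabled via a separate Spring profile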
spring:
kafka:
security.protocol: "SASL_PLAINTEXT"
properties:
sasl:
mechanism: "OAUTHBEARER"
jaas.config: "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"${KAFKA_OAUTH_CLIENT_ID:kafka}\" oauth.client.secret=\"${KAFKA_OAUTH_CLIENT_SECRET:kafka-secret}\" oauth.token.endpoint.uri=\"${KAFKA_OAUTH_TOKEN_ENDPOINT_URI:kafka.auth.localhost}\" ;"
login.callback.handler.class: io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
```
## Setting up the connector locally
For detailed setup instructions, refer to the Setting Up FLOWX.AI Quickstart Connector Readme:
Prerequisites:
* a terminal to clone the GitHub repository
* a code editor and IDE
* JDK version 17
* the Docker Desktop app
* an internet browser
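A rough outline of a local run, assuming the repository uses the Maven wrapper (check the repository README for the exact commands):
```bash
# clone the quickstart connector and switch to the easy-start branch
git clone https://github.com/flowx-ai/quickstart-connector.git
cd quickstart-connector
git checkout feature/easy-start

# with a local Kafka broker running (e.g., via Docker Desktop), start the service
./mvnw spring-boot:run
```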
## Integrating a connector in FLOWX.AI Designer
To integrate and utilize the connector within FLOWX.AI Designer, follow these steps:
1. **Process Designer Configuration**: Utilize the designated communication nodes within the [Process Designer](../../building-blocks/process/process):
* [**Send Message Task**](../../building-blocks/node/message-send-received-task-node#message-send-task): Transmit a message to a topic monitored by the connector. Make sure you choose **Kafka Send Action** type.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/connector_topic.png)
* [**Receive Message Task**](../../building-blocks/node/message-send-received-task-node#message-receive-task): Await a message from the connector on a topic monitored by the engine.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/connector_topic_out.png)
2. **Connector Operations**: The connector identifies and processes the incoming message.
3. **Handling Response**: Upon receiving a response, the connector serializes and deposits the message onto the specified OUT topic.
4. **Engine Processing**: The engine detects the new message, captures the entire content, and stores it within its variables based on the configured variable settings.
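For instance, the message the connector deposits on the OUT topic in step 3, matching the outgoing DTO defined earlier, might look like this (values are illustrative):
```json
{
  "name": "John Doe",
  "errorMessage": null
}
```
The Engine then stores this payload under the key configured in the **Receive Message Task** node's data stream settings.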
For a more complex connector example, see the following repository:
# Creating a Kafka consumer
This guide focuses on creating a **Kafka** consumer using Spring Boot.
Here are some tips, including the required configurations and code samples, to help you implement a Kafka consumer in Java.
## Required dependencies
Ensure that you have the following dependencies in your project:
```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>io.strimzi</groupId>
    <artifactId>kafka-oauth-client</artifactId>
    <version>0.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.5.1</version>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-kafka-client</artifactId>
    <version>0.1.13</version>
</dependency>
```
## Configuration
Ensure that you have the following configuration in your `application.yml` or `application.properties` file:
```yaml
spring:
kafka:
bootstrap-servers: localhost:9092
security.protocol: "PLAINTEXT"
consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
properties:
max:
partition:
fetch:
bytes: ${KAFKA_MESSAGE_MAX_BYTES:52428800} #50MB
kafka:
consumer:
group-id:
...
threads:
...
```
## Code sample for a Kafka Listener
Here's an example of a Kafka listener method:
```java
@KafkaListener(topics = "TOPIC_NAME_HERE") // replace with the actual Kafka topic you want to consume from
public void listen(ConsumerRecord<String, String> record) throws JsonProcessingException {
    SomeDTO request = objectMapper.readValue(record.value(), SomeDTO.class);
    // process the received DTO; adjust the deserialization logic to your specific use case
}
```
# Creating a Kafka producer
This guide focuses on creating a **Kafka** producer using Spring Boot.
Here are some tips, including the required configurations and code samples, to help you implement a Kafka producer in Java.
## Required dependencies
Ensure that you have the following dependencies in your project:
```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>io.strimzi</groupId>
    <artifactId>kafka-oauth-client</artifactId>
    <version>0.6.1</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.5.1</version>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-kafka-client</artifactId>
    <version>0.1.13</version>
</dependency>
```
## Configuration
Ensure that you have the following configuration in your `application.yml` or `application.properties` file:
```yaml
spring:
kafka:
bootstrap-servers: localhost:9092
security.protocol: "PLAINTEXT"
producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
properties:
message:
max:
bytes: ${KAFKA_MESSAGE_MAX_BYTES:52428800} #50MB
max:
request:
size: ${KAFKA_MESSAGE_MAX_BYTES:52428800} #50MB
```
## Code sample for a Kafka producer
Ensure that you have the necessary `KafkaTemplate` bean autowired in your producer class. The `sendMessage` method demonstrates how to send a message to a Kafka topic with the specified headers and payload. Make sure to include all the received Kafka headers in the response sent back to the **FlowX Engine**.
```java
private final KafkaTemplate<String, Object> kafkaTemplate;

public void sendMessage(String topic, Headers headers, Object payload) {
    ProducerRecord<String, Object> producerRecord = new ProducerRecord<>(topic, payload);
    // make sure to send all the received headers back to the FlowX Engine
    headers.forEach(header -> producerRecord.headers().add(header));
    kafkaTemplate.send(producerRecord);
}
```
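For example, from the Kafka listener shown in the consumer guide you could forward the original headers directly: `sendMessage(outTopic, record.headers(), response)` (the variable names are illustrative). This guarantees that the correlation headers set by the Engine make it back unchanged.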
# Integration Designer
The Integration Designer simplifies the integration of FlowX with external systems using REST APIs. It offers a user-friendly graphical interface with intuitive drag-and-drop functionality for defining data models, orchestrating workflows, and configuring system endpoints.
Unlike [Postman](https://www.postman.com/), which focuses on API testing, the Integration Designer automates workflows between systems. With drag-and-drop ease, it handles REST API connections, real-time processes, and error management, making integrations scalable and easy to maintain.
## Overview
Integration Designer facilitates the integration of the FlowX platform with external systems, applications, and data sources.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/intg_designer.png)
Integration Designer focuses on REST API integrations, with future updates expanding support for other protocols.
***
## Key features
You can easily build complex API workflows using a drag-and-drop interface, making it accessible to both technical and non-technical audiences.
Specifically tailored for creating and managing REST API calls through a visual interface, streamlining the integration process without the need for extensive coding.
Allows for immediate testing and validation of REST API calls within the design interface.
***
## Managing integration endpoints
### Systems
A system is a collection of resources—endpoints, authentication, and variables—used to define and run integration workflows.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/systems_overview.png)
### Creating a new system definition
With the **Systems** feature, you can create, update, and organize the endpoints used in API integrations. These endpoints are integral to building workflows within the Integration Designer, offering flexibility and ease of use for managing connections between systems. Endpoints can be configured, tested, and reused across multiple workflows, streamlining the integration process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/systems.png)
Go to the **Systems** section in FlowX Designer at **Projects** -> **Your application** -> **Integrations** -> **Systems**.
1. Add a **New System**, set the system’s unique code, name, and description:
* **Code**: A unique identifier for the external system.
* **Name**: The system's name.
* **Description**: A description of the system and its purpose.
2. Configure the system’s **Base URL**.
To dynamically adjust the base URL based on the environment (e.g., dev, QA, stage), you can use environment variables and configuration parameters. For example: `https://api.${environment}.example.com/v1`.
Additionally, keep in mind that the priority for determining the configuration parameter (e.g., base URL) follows this order: first, input from the user/process; second, environment variables; and lastly, configuration parameters.
3. Set up authorization (Service Token, Bearer Token, or No Auth).
### Defining REST integration endpoints
In this section you can define REST API endpoints that can be reused across different workflows.
1. Under the **Endpoints** section, add the necessary endpoints for system integration.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/add_new_endpoint.png)
2. Configure an endpoint by filling in the following properties:
* **Method**: GET, POST, PUT, PATCH, DELETE.
* **Path**: Path for the endpoint.
* **Parameters**: Path, query, and header parameters.
* **Response**: Expected response codes and formats.
* **Body**: JSON payload for requests.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/configure_endpoint.png)
### Defining variables
The Variables tab allows you to store system-specific variables that can be referenced throughout workflows using the format `${variableName}`.
These declared variables can be utilized not only in workflows but also in other sections, such as the Endpoint or Authorization tabs.
For example:
* You can declare a variable to store your authentication token and reference it in the **Authorization** tab.
* Use variables in the **Base URL** to switch between different environments, such as UAT or production.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/variables_auth.gif)
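For instance, you could declare a variable `apiToken` and reference it in the **Authorization** tab as `Bearer ${apiToken}`, or declare `env` with value `uat` and use it in the Base URL as `https://api-${env}.example.com` (the names and URL are illustrative).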
### Endpoint parameter types
When configuring endpoints, several parameter types help define how the endpoint interacts with external systems. These parameters ensure that requests are properly formatted and data is correctly passed.
#### Path parameters
Elements embedded directly within the URL path of an API request that act as placeholders for specific values.
* Used to specify variable parts of the endpoint URL (e.g., `/users/{userId}`).
* Defined with `${parameter}` format.
* Mandatory in the request URL.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/path_parameters.png)
Path parameters must always be included, while query and header parameters are optional but can be set as required based on the endpoint’s design.
#### Query parameters
Query parameters are added to the end of a URL to provide extra information to a web server when making requests.
* Query parameters are appended to the URL after a `?` symbol and are typically used for filtering or pagination (e.g., `?search=value`).
* Example URL with query parameters: `https://api.example.com/users?search=johndoe&page=2`.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/Screenshot%202024-10-17%20at%2018.18.12.png)
These parameters must be defined in the Parameters table, not directly in the endpoint path.
To preview how query parameters are sent in the request, you can use the **Preview** feature to see the exact request in cURL format. This shows the complete URL, including query parameters.
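For instance, the cURL preview for the example URL above might resemble the following (the bearer token is a placeholder):
```bash
curl -X GET "https://api.example.com/users?search=johndoe&page=2" \
  -H "Authorization: Bearer <access_token>"
```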
#### Header parameters
Information sent with the request that instructs the API on how to handle it.
* Header parameters (HTTP headers) provide extra details about the request or its message body.
* They are not part of the URL. Default values can be set for testing and overridden in the workflow.
* Custom headers sent with the request (e.g., `Authorization: Bearer token`).
* Define metadata or authorization details.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/header_parameters1.png)
#### Body parameters
The data sent to the server when an API request is made.
* These are the data fields included in the body of a request, usually in JSON format.
* Body parameters are used in POST, PUT, and PATCH requests to send data to the external system (e.g., creating or updating a resource).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/body_param.png)
#### Response body parameters
The data sent back from the server after an API request is made.
* These parameters are part of the response returned by the external system after a request is processed. They contain the data that the system sends back.
* Typically returned for GET, POST, PUT, and PATCH requests, response body parameters provide details about the result of the request (e.g., confirmation of resource creation or data retrieval).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/response_body_param.png)
### Enum mapper
The enum mapper for the request body enables you to configure enumerations for specific keys in the request body, aligning them with values from the External System or translations into another language.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/enum_mapper.png)
On enumerations, you can map both translation values for different languages and values for different source systems.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/Screenshot%202024-10-27%20at%2012.42.28.png)
Make sure the enumerations, with their corresponding translations and system values, already exist in your application:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/2024-10-27%2012.41.08.gif)
Select whether the integration should use the enumeration value corresponding to the External System or its translation into another language.
For translations, a header parameter called `Language` is required to specify the target language.
### Configuring authorization
* Select the required **Authorization Type** from a predefined list.
* Enter the relevant details based on the selected type (e.g., Realm and Client ID for Service Accounts).
* These details will be automatically included in the request headers when the integration is executed.
### Authorization methods
The Integration Designer supports several authorization methods, allowing you to configure the security settings for API calls. Depending on the external system's requirements, you can choose one of the following authorization formats:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/auth_type.png)
#### Service account
Service Account authentication requires the following key fields:
* **Identity Provider Url**: The URL for the identity provider responsible for authenticating the service account.
* **Client Id**: The unique identifier for the client within the realm.
* **Client secret**: A secure secret used to authenticate the client alongside the Client ID.
* **Scope**: Specifies the access level or permissions for the service account.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/auth_service_account.png)
When using Entra as an authentication solution, the **Scope** parameter is mandatory. Ensure it is defined correctly in the authorization settings.
#### Basic authentication
* Requires the following credentials:
* **Username**: The account's username.
* **Password**: The account's password.
* Suitable for systems that rely on simple username/password combinations for access.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/basic_auth.png)
#### Bearer
* Requires an **Access Token** to be included in the request headers.
* Commonly used for OAuth 2.0 implementations.
* Header Configuration: Use the format `Authorization: Bearer {access_token}` in headers of requests needing authentication.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/token_bearer.png)
* System-Level Example: You can store the Bearer token at the system level, as shown in the example below, ensuring it's applied automatically to future API calls:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/bearer.png)
Store tokens in a configuration parameter so updates propagate across all requests seamlessly when tokens are refreshed or changed.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/token_config_param.png)
#### Certificates
Some external systems require a client certificate for access. Use this setup to configure secure communication with such a system.
It includes paths to both a Keystore (which holds the client certificate) and a Truststore (which holds trusted certificates). You can toggle these features based on the security requirements of the integration.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/certificates_expand.gif)
When the Use Certificate option is enabled, you will need to provide the following certificate-related details:
* **Keystore Path**: Specifies the file path to the keystore, in this case, `/opt/certificates/testkeystore.jks`. The keystore contains the client certificate used for securing the connection.
* **Keystore Password**: The password used to unlock the keystore.
* **Keystore Type**: The format of the keystore, JKS or PKCS12, depending on the system requirements.
**Truststore credentials**
* **Truststore Path**: The file path is set to `/opt/certificates/testtruststore.jks`, specifying the location of the truststore that holds trusted certificates.
* **Truststore Password**: Password to access the truststore.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/certificates.png)
***
## Workflows
A workflow defines a series of tasks and processes to automate system integrations. Within the Integration Designer, workflows can be configured using different components to ensure efficient data exchange and process orchestration.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/workflow_ex.png)
### Creating a workflow
1. Navigate to Workflow Designer:
* In FlowX.AI Designer, go to **Projects -> Your application -> Integrations -> Workflows**.
* Create a New Workflow, provide a name and description, and save it.
2. Start to design your workflow by adding nodes to represent the steps of your workflow:
* **Start Node**: Defines where the workflow begins and also defines the input parameter for subsequent nodes.
* **REST endpoint nodes**: Add REST API calls for fetching or sending data.
* **Fork nodes (conditions)**: Add conditional logic for decision-making.
* **Data mapping nodes (scripts)**: Write custom scripts in JavaScript or Python.
* **End Nodes**: Capture output data as the completed workflow result, ensuring the process concludes with all required information.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/create_workflow.gif)
### Workflow nodes
Users can visually build workflows by adding various nodes, including:
* Workflow start node
* REST endpoint nodes
* Data mapping nodes (scripts)
* Fork nodes (conditions)
* End node
#### Workflow start node
The Start node is the default and mandatory first node in any workflow. It initializes the workflow and makes the input parameters defined on it available to subsequent nodes.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/workflow_start_node.png)
The Start node defines the initial data model for the workflow. This input data model can be customized: enter custom JSON data by clicking inside the code editor and typing your input. This data will be passed to subsequent nodes in the workflow.
For example, if you want to define a **first name** parameter, you can add it like this in the **Start Node**:
```json
{
"firstName": "John"
}
```
Later, in the body of a subsequent workflow node, you can reference this input using:
```json
{
"First Name": "${firstName}"
}
```
This ensures that the data from the Start node is dynamically passed through the workflow.
When you want to send input data from a process to a workflow, use the Start workflow node to map the data coming from the process and make it available across the entire workflow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/start_input.png)
Make sure the data is also mapped in the **Start Integration Workflow** node action that sends it.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/data_input_start.png)
Only one Start node is allowed per workflow. The Start node is always the first node in the workflow and cannot have any incoming connections. Its sole function is to provide the initial data for the workflow.
The Start node cannot be renamed or deleted from the workflow.
#### REST endpoint nodes
The REST endpoint node enables communication with external systems to retrieve or update data by making REST API calls. It supports multiple methods like GET, POST, PUT, PATCH, and DELETE. Endpoints are selected via a dropdown menu, where available endpoints are grouped by the system they belong to.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/add_rest_endpoint.gif)
The node is added by selecting it from the "Add Connection" dropdown in the workflow designer.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/rest_endpoint_add.png)
You can include multiple REST endpoint nodes within the same workflow, allowing for integration with various systems or endpoints.
Unlike some nodes, the Endpoint Call node can be run independently, making it possible to test the connection or retrieve data without executing the entire workflow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/run_endpoint_node.gif)
**Input and output**
Each REST endpoint node includes some essential tabs:
* **Params**:
* **Response key**: The response from the endpoint node, including both data and metadata, is organized under a predefined response key.
* **Input**:
* This tab contains read-only JSON data that is automatically populated with the output from the previous node in the workflow.
* **Output**:
* It displays the API response in JSON format.
#### Condition (fork) nodes
The Condition node evaluates incoming data from a connected node against defined logical conditions (if / else if). It directs the workflow along different paths depending on whether the condition evaluates to TRUE or FALSE.
**Defining Conditions in JavaScript or Python**
Logical conditions for the Condition Node can be written in either JavaScript or Python, depending on the requirements of your workflow.
* If the condition evaluates to TRUE, the workflow follows the If path.
* If the condition evaluates to FALSE, it follows the Else if path.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/fork_condition_node.png)
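For example, a minimal JavaScript condition (assuming the incoming data model contains an `age` field) could be:
```js
input.age >= 18
```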
You can include multiple Condition nodes within a single workflow, enabling the creation of complex branching logic and decision-making flows.
**Parallel processing and forking**
The Condition node can split the workflow into parallel branches, allowing for multiple conditions to be evaluated simultaneously. This capability makes it ideal for efficiently processing different outcomes at the same time.
#### Data mapping nodes (scripts)
The Script node allows you to transform and map data between different systems during workflow execution by writing and executing custom code in JavaScript or Python. It enables complex data transformations and logic to be applied directly within the workflow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/script_node.png)
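As an illustrative sketch (the field names are assumptions, and the exact value-return convention may differ in your setup), a JavaScript mapping in a Script node could look like this:
```js
// reshape the previous node's output into the structure the next node expects
var mapped = {
  fullName: input.firstName + " " + input.lastName,
  contactEmail: input.email
};
mapped;
```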
#### End node
The End node signifies the termination of a workflow's execution. It collects the final output and completes the workflow process.
Multiple End nodes can be included within a single workflow. This allows the workflow to have multiple possible end points based on different execution paths.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/end_1.png)
The End node automatically receives input in JSON format from the previous node, and you can modify this input by editing it directly in the code editor. If the node's output doesn't meet mandatory requirements, it will be flagged as an error to ensure all necessary data is included.
The output of the End node represents the final data model of the workflow once execution is complete.
#### Testing the workflow
You can always test your endpoints in the context of the workflow: run endpoints individually (where applicable) or run the entire workflow.
#### Debugging
Use the integrated console after running a workflow (whether you test it in the workflow designer or in a process definition). It provides useful information such as logs, input and output data for each endpoint, and other details like execution time.
### Workflow integration
Integrating workflows into a BPMN process allows for structured handling of tasks like user interactions, data processing, and external system integrations.
This is achieved by connecting workflow nodes to User Tasks and Service Tasks using the [**Start Integration Workflow**](../../building-blocks/actions/start-integration-workflow) action.
1. **Open the FlowX Process Designer**:
* Navigate to **Projects -> Your application -> Processes**.
* Create a new process or edit an existing one.
2. **Define the Data Model**:
Needed if you want to send data from your user task to the workflow.
* Establish the data model that will be shared between the process and the workflow.
* Ensure all necessary data fields that the workflow will use are included.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/data_model_int.png)
3. **Add a Task**:
* Insert a **User Task** or **Service Task** into your BPMN diagram.
* A **User Task** requires user input, while a **Service Task** can trigger automated actions without manual intervention.
4. **Configure Actions for the Task**:
* In the node config, add a **Start Integration Workflow** action.
* Select the target workflow you want to integrate. This links the task with the predefined workflow in the Integration Designer.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/add_action.png)
5. **Map the Payload**:
* If input data is defined in the **Start Node** of the workflow, it will be **automatically mapped** in the **Start Integration Workflow** action. Ensure that the workflow’s Start Node contains the fields you need.
* Additional payload keys and values can also be set up as needed to facilitate data flow from the process to the workflow.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/add_start_integration_workflow.gif)
6. **Add a Receive Message Task**:
* To handle data returned by the workflow, add a **Receive Message Task** in the BPMN diagram.
* This task captures the workflow’s output data, such as processing status or results sent via Kafka.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/receive_kafka_workflow.png)
7. **Set Up a Data Stream Topic**:
* In the **Receive Message Task**, select your workflow from the **Data Stream Topics** dropdown.
* Ensure that the workflow output data, including status or returned values, is accurately captured under a predefined key.
***
## Integration with external systems
This example demonstrates how to integrate FlowX with an external system (here, Airtable) to manage and update user credit status data. It walks through the setup of an integration system, defining API endpoints, creating workflows, and linking them to BPMN processes in FlowX Designer.
Before going through this integration example, we recommend the following:
* Create your own base and table in Airtable; details [here](https://www.airtable.com/guides/build/create-a-base).
* Check the Airtable Web API docs [here](https://airtable.com/developers/web/api/introduction) to get familiar with the Airtable API.
### Integration in FlowX
Navigate to the **Integration Designer** and create a new system:
* Name: **Airtable Credit Data**
* **Base URL**: `https://api.airtable.com/v0/`
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/airtable1.png)
In the **Endpoints** section, add the necessary API endpoints for system integration:
1. **Get Records Endpoint**:
* **Method**: GET
* **Path**: `/${baseId}/${tableId}`
* **Path Parameters**: Add the values for `baseId` and `tableId` so they are available in the path.
* **Header Parameters**: Authorization Bearer token
See the [API docs](https://airtable.com/developers/web/api/list-records).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/airtable2.png)
2. **Create Records Endpoint**:
* **Method**: POST
* **Path**: `/${baseId}/${tableId}`
* **Path Parameters**: Add the values for `baseId` and `tableId` so they are available in the path.
* **Header Parameters**:
* `Content-Type: application/json`
* Authorization Bearer token
* **Body**: JSON format containing the fields for the new record. Example:
```json
{
"typecast": true,
"records": [
{
"fields": {
"First Name": "${firstName}",
"Last Name": "${lastName}",
"Age": ${age},
"Gender": "${gender}",
"Email": "${email}",
"Phone": "${phone}",
"Address": "${address}",
"Occupation": "${occupation}",
"Monthly Income ($)": ${income},
"Credit Score": ${creditScore},
"Credit Status": "${creditStatus}"
}
}
]
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/airtable3.png)
1. **Open the Workflow Designer** and create a new workflow.
* Provide a name and description.
2. **Configure Workflow Nodes**:
* **Start Node**: Initialize the workflow.
On the Start node, add the data that you want to receive from the process. This way, when you add the **Start Integration Workflow** node action, it will be populated with this data.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/integration_designer/start_data.gif)
```json
{
"firstName": "${firstName}",
"lastName": "${lastName}",
"age": ${age},
"gender": "${gender}",
"email": "${email}",
"phone": "${phone}",
"address": "${address}",
"occupation": "${occupation}",
"income": ${income},
"creditScore": ${creditScore},
"creditStatus": "${creditStatus}"
}
```
Make sure these keys are also mapped in the data model of your process with their corresponding attributes.
* **REST Node**: Set up API calls:
* **GET Endpoint** for fetching records from Airtable.
* **POST Endpoint** for creating new records.
* **Condition Node**: Add logic to handle credit scores (e.g., triggering a warning if the credit score is below 300).
Condition example:
```js
input.responseKey.data.records[0].fields["Credit Score"] < 300
```
* **Script Node**: Include custom scripts if needed for processing data (not used in this example).
* **End Node**: Define the end of the workflow with success or failure outcomes.
1. **Integrate the workflow** into a BPMN process:
* Open the process diagram and include a **User Task** and a **Receive Message Task**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/bpmn_airtable.png)
In this example, we'll use a User Task because we need to capture user data and send it to our workflow.
2. **Map Data** in the **UI Designer**:
* Create the data model
* Link data attributes from the data model to form fields, ensuring the user input aligns with the expected parameters.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/data_model_id.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/ut_dat_input.gif)
3. **Add a Start Integration Workflow** node action:
* Make sure all the input is captured.
**Receive Workflow Output**:
* Use the **Receive Message Task** to capture workflow outputs like status or returned data.
* Set up a **Data stream topic** to ensure workflow output is mapped to a predefined key.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/id_kafka_receive.png)
* Start your process to initiate the workflow integration. It should add a new user with the details captured in the user task.
* Check whether it worked by going to your base in Airtable; you should see that the user has been added.
***
This example demonstrates how to integrate Airtable with FlowX to automate data management. You configured a system, set up endpoints, designed a workflow, and linked it to a BPMN process.
## FAQs
**Q:** What types of integrations does the Integration Designer support?
**A:** Currently, the Integration Designer only supports REST APIs, but future updates will include support for SOAP and JDBC.
**Q:** How are security aspects such as certificates and secret keys handled?
**A:** The Integration Service handles all security aspects, including certificates and secret keys. Authorization methods like Service Token, Bearer Token, and OAuth 2.0 are supported.
**Q:** How are errors handled?
**A:** Errors are logged within the workflow and can be reviewed in the dedicated monitoring console for troubleshooting and diagnostics.
**Q:** Can endpoint specifications be imported (e.g., from Swagger)?
**A:** Currently, the Integration Designer only supports adding endpoint specifications manually. Import functionality (e.g., importing configurations from sources like Swagger) is planned for future releases.
For now, you can manually define your endpoints by entering the necessary details directly in the system.
# Overview
Integrations play a crucial role in connecting legacy systems or third-party applications to the FlowX Engine. They enable seamless communication by leveraging custom code and the Kafka messaging system.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/3.5/custom_intg.svg)
Integrations serve various purposes, including working with legacy APIs, implementing custom file exchange solutions, or integrating with RPAs.
#### High-level architecture
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/intgr_final.png)
Integrations involve interaction with legacy systems and require custom development to integrate them into your FLOWX.AI setup.
## Developing a custom integration
Developing custom integrations for the FlowX.AI platform is a straightforward process. You can use your preferred technology to write the necessary custom code, with the requirement that it can send and receive messages from the **Kafka** cluster.
#### Steps to create a custom integration
Follow these steps to create a custom integration:
1. Develop a microservice, referred to as a "Connector," using your preferred tech stack. The Connector should listen for Kafka events, process the received data, interact with legacy systems if required, and send the data back to Kafka.
2. Configure the [process definition](../../building-blocks/process/process-definition) by adding a [message](../../building-blocks/node/message-send-received-task-node) send action in one of the [nodes](../../building-blocks/node/node). This action sends the required data to the Connector.
3. Once the custom integration's response is ready, send it back to the FLOWX.AI engine. Keep in mind that the process will wait in a receive message node until the response is received.
For Java-based Connector microservices, you can use the following startup code as a quickstart guide:
## Managing an integration
#### Managing Kafka topics
It's essential to configure the engine to consume events from topics that follow a predefined naming pattern. The naming pattern is defined using a topic prefix and suffix, such as "*ai.flowx.dev.engine.receive*."
We recommend the following naming convention for your topics:
```yaml
kafka:
  topic:
    naming:
      package: "ai.flowx."
      environment: "dev."
      version: ".v1"
      prefix: ${kafka.topic.naming.package}${kafka.topic.naming.environment}
      suffix: ${kafka.topic.naming.version}
      engineReceivePattern: engine.receive
      pattern: ${kafka.topic.naming.prefix}${kafka.topic.naming.engineReceivePattern}*
```
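With this naming in place, the resolved pattern is `ai.flowx.dev.engine.receive*`, so a Connector reply topic such as `ai.flowx.dev.engine.receive.easy-connector.v1` (an illustrative name following the prefix/suffix convention) would be picked up by the Engine.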
# Mock integrations
If you need to test the business process flow but haven't completed all integrations, you can still do so by utilizing the mock integrations server included in the platform.
## Setup
To begin, configure the microservice's DB settings to use a Postgres DB. Then, deploy the mocked adapter microservice.
## Adding a new integration
Setting up a mocked integration requires only one step: adding a mock Kafka request and response.
You have two options for accomplishing this:
1. Add the information directly to the DB.
2. Use the provided [**API**](/4.0/docs/api/add-kafka-mock).
For each Kafka message exchange between the engine and the integration, you need to create a separate entry.
Check out the [**Add new exchange Kafka mock**](/4.0/docs/api/add-kafka-mock) API reference for more details.
Check out the [**View all available Kafka exchanges**](/4.0/docs/api/add-kafka-mock) API reference for more details.
# Observability with OpenTelemetry
## What is Observability?
Observability is the capacity to infer the internal state of a system by analyzing its external outputs. In software development, this entails understanding the internal workings of a system through its telemetry data, which comprises traces, metrics, and logs.
## What is OpenTelemetry?
OpenTelemetry is an observability framework and toolkit for generating and managing telemetry data, including traces, metrics, and logs. It is vendor-agnostic and compatible with various observability backends like Jaeger and Prometheus. Unlike observability backends, OpenTelemetry focuses on the creation, collection, and export of telemetry data, leaving storage and visualization to other tools.
Tracing with OpenTelemetry is available starting with the FlowX.AI v.4.1.0 release.
## How does it work?
Our monitoring and performance analysis system leverages OpenTelemetry for comprehensive tracing and logging across our microservices architecture. By integrating with Grafana and other observability tools, we achieve detailed visibility into the lifecycle of requests, the performance of individual operations, and the interactions between different components of the system.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/otel_hla.drawio.png)
OTEL Collectors are designed in a vendor-agnostic way to receive, process, and export telemetry data. More information about OTEL Collectors can be found [**here**](https://opentelemetry.io/docs/collector/).
Recommended OpenTelemetry Collector Processors: Follow the [**recommended processors**](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor#recommended-processors).
## Prerequisites
### Microservices
* Custom code addition for manual instrumentation.
* Configuration and deployment of Java agent.
* Performance impact assessment.
### Kubernetes
* Use of a Kubernetes Operator for managing instrumentation and tracing configuration.
## Instrumentation
### Auto-instrumentation with Java agent
* **How it works**: Automatically wraps methods at the application edges (HTTP calls, Kafka messages, DB calls), creating spans and adding default span attributes.
* **Configuration**: Configure the Java agent for auto-instrumentation.
### Manual instrumentation
* **Custom spans**: Created for methods important to the business flow and enriched with business attributes such as `fx.type`, `fx.methodName`, `fx.processInstanceUuid`, and others.
* **Custom BUSINESS Spans**: Create spans for business events.
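As a minimal sketch of what such manual instrumentation can look like with the OpenTelemetry Java API (the tracer name, method, and attribute values are illustrative, not the platform's actual code):
```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class TokenAdvancer {

    private final Tracer tracer = GlobalOpenTelemetry.getTracer("flowx-engine");

    public void advanceToken(String processInstanceUuid) {
        // create a custom BUSINESS span enriched with FlowX attributes
        Span span = tracer.spanBuilder("advanceToken")
                .setAttribute("fx.type", "BUSINESS")
                .setAttribute("fx.methodName", "advanceToken")
                .setAttribute("fx.processInstanceUuid", processInstanceUuid)
                .startSpan();
        try (Scope scope = span.makeCurrent()) {
            // ... business logic to advance the token ...
        } finally {
            span.end();
        }
    }
}
```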
## Business logic metadata in logs and spans
Spans now include custom FlowX attributes (e.g., node names, action names, process names, instance UUIDs), which can be used for filtering and searching in traces.
Here is the full list of included custom FlowX span attributes:
### Custom span attributes
* `fx.type` - BUSINESS/TECHNICAL
* `fx.methodName`
* `fx.parentProcessInstanceId`
* `fx.parentProcessInstanceUuid`
* `fx.processInstanceUuid`
* `fx.processName`
* `fx.processVersionId`
* `fx.tokenInstanceUuid`
* `fx.nodeName`
* `fx.nodeId`
* `fx.nodeUuid`
* `fx.boundaryEventId`
* `fx.nextNodeId`
* `fx.triggeredByBoundaryEventId`
* `fx.actionUuid`
* `fx.actionName`
* `fx.context`
* `fx.platform`
### Custom business spans
* identified by the `fx.type = BUSINESS` attribute
### Detailed trace operations
Trace specific operations and measure request time across different layers/services.
* **Process Start**: Auto-instrumentation enabled for Spring Data to show time spent in repository methods. JDBC query instrumentation can be added.
* **Token Creation and Advancing**: Custom tracing added.
* **Action Execution and Subprocess Start**: Custom tracing added.
## Troubleshooting scenarios and common usages
### Scenario examples
* **Process Trace**: Analyze DB vs cache times, token advancement, node actions.
* **Parallel Gateway**: Trace split tokens.
* **DB Query Time**: Enable JDBC query tracing.
* **Endpoint Data Issues**: Check traces for Redis or DB source.
* **Token Stuck**: Filter by node name and process UUID.
* **Action Execution**: Trace action names for stuck tokens.
* **Subprocess Failures**: Analyze subprocess start and failures.
* **Latency Analysis**: Identify latencies in automatic actions.
* **Boundary Events**: Ensure Kafka schedule messages are sent and received correctly.
* **External Service Tracking**: Trace between process engine and external plugins.
### Business operation analysis
* **Long Running Operations**: Use Uptrace for identifying slow operations.
* **Failed Requests**: Filter traces by error status.
### Visualization of Traces
We recommend using Grafana, but any observability platform compatible with OpenTelemetry standards can be used.
Grafana integrates with tracing backends such as Tempo (for tracing) and Loki (for logging), allowing us to visualize the entire lifecycle of a request. This includes detailed views of spans, which are the basic units of work in a trace. By using Grafana, we can:
* **View Trace Trees**: Grafana provides an intuitive UI for viewing the hierarchy and relationships between spans, making it easier to understand the flow of a request through the system.
* **Filter and Search**: Use Grafana to filter and search spans based on custom attributes like `fx.processInstanceUuid`, `fx.nodeName`, `fx.actionName`, and others. This helps in pinpointing specific operations or issues within a trace.
* **Error Analysis**: Identify spans with errors and visualize the stack trace or error message, aiding in quick troubleshooting.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/custom_spans.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/custom_spans1.png)
# FlowX AI Agents
Get ready to revolutionize your journey with our upcoming AI-powered agents.
Want to learn more? Reach out to us and discover how we can make your experience smoother than ever before!
# FlowX custom plugins
Adding new capabilities to the core platform can be easily done by using plugins. FlowX plugins represent already-built functionality that can be added to a FlowX.AI platform deployment.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/plugins_40.png)
These could be either one of the provided custom **plugins** that we've already built or building your desired plugin.
On our roadmap, we’re also looking to enhance the **plugins library** with 3rd party providers, so stay tuned for more.
## High-level architecture
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/plugins_diagram.png)
The plugins are microservice apps that can be developed using any tech stack. The only requirement is that they need to be able to connect to the core platform using Kafka events.
To interact with plugins, you need to understand a few details about them:
* the events that can trigger them
* the infrastructure components needed
* the needed configurations
## Custom plugins
The currently available plugins are:
* [**Documents**](./custom-plugins/documents-plugin/documents-plugin-overview) - easily generate, host and access any kind of documents
* [**Notifications**](./custom-plugins/notifications-plugin/notifications-plugin-overview) - enhance your project with the option of sending custom emails or SMS notifications
* [**OCR**](./custom-plugins/ocr-plugin) - helps you scan your documents and integrate them into a business process
* [**Task management**](./custom-plugins/task-management/task-management-overview) - a plugin suitable for back-officers and supervisors as it can be used to easily track and assign activities/tasks inside a company.
* [**Reporting**](./custom-plugins/reporting/reporting-overview) - a plugin that will help you create and bootstrap custom reports built on generic information about usage and processes metrics
Let's get into a bit more detail about the custom plugins 🎛️
## Document management plugin
**Effortless document generation and safe-keeping**
The document management plugin securely stores documents, facilitates document generation based on predefined templates and also handles conversion between various document formats.
It offers an easy-to-use interface for handling documents on event-based Kafka streams.
![high level architecture](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/document_service_architecture.svg)
## Notifications plugin
**Multi-channel notifications made easy**
The plugin handles various types of notifications:
* SMS (if a third party service is available for communication management)
* email notifications
* generating and validating OTP passwords for **user identity verification**
It can also be used to forward custom notifications to external outgoing services. It offers an intuitive interface for defining templates for each kind of notification and handles sending and auditing notifications easily.
![high level architecture](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/custom_plugins_architecture.svg)
## Task management
**Helper for back-officers and supervisors, easy track, assignment management**
The Task Management plugin displays a process that you defined using FlowX Designer in a more business-oriented view. It also offers interactions at the assignment level.
## Customer management
**Convenient and secure access to user data**
Light CRM uses an Elasticsearch engine to retrieve user details using partial matches on intricate databases.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/crm_plugin_archi.svg)
## OCR plugin
**Automatic key information extraction**
Used to easily read barcodes or extract handwritten signatures from PDF documents.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/ocr_plugin_archi.svg)
## Reporting plugin
**Easy-to-read dynamic dashboards**
Use reporting plugin to build and bootstrap custom reports built on generic information about usage and processes.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/reporting_diag.png)
# Converting files
Currently, the supported conversion method is limited to transforming **PDF** files into **JPEG** format.
This guide provides step-by-step instructions on how to convert an uploaded file (utilizing the provided example) from PDF to JPEG.
## Prerequisites
1. **Access Permissions**: Ensure that you have the necessary permissions to use the Documents Plugin. The user account used for these operations should have the required access rights.
2. **Kafka Configuration**: Verify that the Kafka messaging system is properly configured and accessible. The Documents Plugin relies on Kafka for communication between nodes.
* **Kafka Topics**: Familiarize yourself with the Kafka topics used for these operations (covered later in this section)
3. Before initiating the conversion process, it is essential to identify the file in the storage solution using its unique ID. This ensures that the conversion is performed on an already uploaded file.
You have two options to obtain the file ID:
* Extract the file ID from a [**Response Message**](./uploading-a-new-document) of an upload file request. For more details, refer to the [**upload process documentation**](./uploading-a-new-document).
* Extract the file ID from a [**Response Message**](./generating-from-html-templates) of a generate from template request. For more details, refer to the [**document generation reply documentation**](./generating-from-html-templates)
In the following example, we will use the `fileId` generated for [**Uploading a New Document**](./uploading-a-new-document) scenario.
```json
{
"customId": "119246",
"fileId": "96975e03-7fba-4a03-99b0-3b30c449dfe7",
"documentType": "BULK",
"documentLabel": null,
"minioPath": "flowx-dev-process-id-119246/119246/458_BULK.pdf",
"downloadPath": "internal/files/96975e03-7fba-4a03-99b0-3b30c449dfe7/download",
"noOfPages": null,
"error": null
}
```
## Configuring the process
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/convert_pdf_to_jpeg.png)
To create a process that converts a document from PDF to JPEG format, follow these steps:
1. Create a process that includes a [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node) node and a [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node):
* Use the **Send Message Task** node to send the conversion request.
* Use the **Receive Message Task** node to receive the reply.
2. Configure the first node (**Send Message Task**) by adding a **Kafka send action**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/convert_action_name.png)
3. Specify the [**Kafka topic**](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup#kafka-configuration) where you send the conversion request.
To identify the topics defined in your current environment, follow these steps:
1. From the FLOWX.AI main screen, navigate to the **Platform Status** menu at the bottom of the left sidebar.
2. In the FLOWX Components list, scroll to the **document-plugin-mngt** line and press the eye icon on the right side.
3. In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → file → convert**. Here you will find the in and out topics for converting files.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/kakfa_topics_convert.png)
4. Fill in the body of the message request.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/convert_action.png)
#### Message request example
This is an example of a message that follows the custom integration data model.
```json
{
"fileId": "96975e03-7fba-4a03-99b0-3b30c449dfe7",
"to": "image/jpeg"
}
```
* `fileId`: The file ID that will be converted
* `to`: The format to convert to, expressed as a MIME type (in this case, `image/jpeg`)
5. Configure the second node (**Receive Message Task**) by adding a **Data stream topic**:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/convert_stream.png)
The response will be sent to this `..out` Kafka topic.
## Receiving the reply
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/convert_response.png)
The following values are expected in the reply body:
* **customId**: The unique identifier for your document (for example, the ID of a client)
* **fileId**: The file ID
* **documentType**: The document type
* **documentLabel**: The document label (if available)
* **minioPath**: The path where the converted file is saved. It represents the location of the file in the storage system, whether it's a MinIO path or an S3 path, depending on the specific storage solution
* **downloadPath**: The download path for the converted file
* **noOfPages**: If applicable
* **error**: Any error message in case of an error during the conversion process
#### Message response example
```json
{
"customId": "119246",
"fileId": "8ec75c0e-eaa6-4d80-b7e5-15a68bba7459",
"documentType": "BULK",
"documentLabel": null,
"minioPath": "flowx-dev-process-id-119246/119246/461_BULK.jpg",
"downloadPath": "internal/files/461/download",
"noOfPages": null,
"error": null
}
```
The converted file is now available in the storage solution and can be downloaded:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/jpg_final.png)
Note that the actual values in the response will depend on the specific conversion request and the document being converted.
# Deleting files
The Documents plugin provides functionality for deleting files.
## Prerequisites
Before deleting files, ensure:
1. **Access Permissions**: Ensure that the user account used has the necessary access rights for updates or deletions.
2. **Kafka Configuration**:
* **Verify Kafka Setup**: Ensure proper configuration and accessibility of the Kafka messaging system.
* **Kafka Topics**: Understand the Kafka topics used for these operations.
3. **File IDs and Document Types**: Prepare information for updating or deleting files:
* `fileId`: ID of the file to delete.
* `customId`: Custom ID associated with the file.
In the example below, we use a `fileId` generated for a document using [**Uploading a New Document**](/4.0/docs/platform-deep-dive/plugins/custom-plugins/documents-plugin/uploading-a-new-document) scenario.
```json
{
"docs": [
{
"customId": "119407",
"fileId": "c4e6f0b0-b70a-4141-993b-d304f38ec8e2",
"documentType": "BULK",
"documentLabel": null,
"minioPath": "flowx-dev-process-id-119408/119407/466_BULK.pdf",
"downloadPath": "internal/files/c4e6f0b0-b70a-4141-993b-d304f38ec8e2/download",
"noOfPages": 2,
"error": null
}
],
"error": null
}
```
## Configuring the deletion process
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/delete_file_proc.png)
To delete files, follow these steps:
1. Create a process that includes a [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node) node and [**Message Event Receive (Kafka)**](/4.0/docs/building-blocks/node/message-send-received-task-node#configuring-a-message-receive-task-node) node:
* Use the **Send Message Task** node to send the delete request.
* Use the **Receive Message Task** node to receive the delete reply.
2. Configure the **first node (Send Message Task)** by adding a **Kafka Send Action**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/delete_file_action.png)
3. Specify the [**Kafka topic**](../../../../../setup-guides/plugins-setup-guide/documents-plugin-setup) for sending the delete request.
To identify defined topics in your environment:
* Navigate to **Platform Status > FLOWX Components > document-plugin-mngt** and click the eye icon on the right side.
* In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → file → delete**. Here you will find the in and out topics for deleting files.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/delete_topics.png)
4. Fill in the request message body.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/delete_request_message.png)
#### Message request example
Example of a message following the custom integration data model:
```json
{
"customId": "119408",
"fileId": "c4e6f0b0-b70a-4141-993b-d304f38ec8e2"
}
```
* **fileId**: The ID of the file to delete.
* **customId**: The custom ID associated with the file.
5. Configure the **second node (Receive Message Task)** by adding a Data stream topic:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/delete_stream.png)
The response will be sent to the `..out` Kafka topic.
### Receiving the reply
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/delete_response.png)
The reply body should contain the following values:
* **customId**: The unique identifier for your document (it could be for example the ID of a client)
* **fileId**: The ID of the file
* **documentType**: The document type
* **error**: Any error message in case of an error during the deleting process
#### Message response example
```json
{
"customId": "119408",
"fileId": "c4e6f0b0-b70a-4141-993b-d304f38ec8e2",
"documentType": null,
"error": null
}
```
# Documents plugin
The Documents plugin can be easily added to your custom FlowX.AI deployment to enhance the core platform capabilities with functionality specific to document handling.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/doc_plugin_general.png)
The plugin offers the following features:
* **Document storage and editing**: Easily store and make changes to documents.
* **Document generation**: Generate documents using predefined templates and custom process-related data.
* **WYSIWYG editor**: Create various templates using a user-friendly ["What You See Is What You Get" (WYSIWYG) editor](../../wysiwyg).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/doc_plugin_wysiwyg.png)
* **Template import**: Import templates created in other environments.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/doc_plugin_create_import.png)
When exporting a document template, it is transformed into a JSON file that can be imported later.
* **Document conversion**: Convert documents from PDF to JPEG format.
* **Document splitting**: Split bulk documents into smaller separate documents.
* **Document editing**: Add generated barcodes, signatures, and assets to documents.
* **OCR integration**: When a document requires OCR (Optical Character Recognition) processing, the Documents Plugin initiates the interaction by passing the document data or reference to the [**OCR plugin**](../ocr-plugin).
The Documents Plugin can be easily deployed on your chosen infrastructure, preloaded with industry-specific document templates using an intuitive WYSIWYG editor, and connected to the FLOWX Engine through Kafka events, using the following node types:
* [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#message-send-task)
* [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#message-receive-task)
Performance considerations:
To ensure optimal performance while using the Documents Plugin, consider the following recommendations:
* For large or complex documents, allocate sufficient system resources, such as CPU and memory, to handle the conversion and editing processes efficiently.
* Avoid processing extremely large files or a large number of files simultaneously, as this may impact performance and responsiveness.
* Monitor system resources during generation, editing, conversion, and similar operations, and scale resources as needed to maintain smooth operations.
Following these performance considerations will help optimize document processing and improve overall system performance.
## Using Documents plugin
Once you have deployed the Documents Plugin in your infrastructure, you can start creating various document templates. After selecting a document template, proceed to create a process definition by including [**Send Message/Receive Message**](../../../../building-blocks/node/message-send-received-task-node) (Kafka nodes) and custom document-related actions in your process flow.
Before adding these actions to your **process definition**, follow these steps:
1. Ensure that all custom information is properly configured in the plugin database, such as the document templates to be used.
2. For each event type, you will need a corresponding pair of Kafka topics (one for the request, one for the reply).
The `..in` topic names configured for the plugin should match [**the `..out` topic names used when configuring the engine**](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka). Make sure to use an outgoing topic name that matches the pattern configured in the Engine; the pattern can be found and overwritten in the `KAFKA_TOPIC_PATTERN` variable. For example, the reply topic `ai.flowx.updates.document.html.generate.v1` used later in this section is expected to match that pattern.
For more details about Process Engine Kafka topic configuration, click [**here**](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka).
To make a request to the plugin, the process definition needs to include an action of type **Kafka send** defined on a [**Send Message Task**](../../../../building-blocks/node/message-send-received-task-node#message-send-task) node. The action parameter should have the key `topicName` and the corresponding topic name as its value.
To receive a reply from the plugin, the process definition needs to include a [**Receive Message Task**](../../../../building-blocks/node/message-send-received-task-node#message-receive-task) node with a node value having the key `topicName` and the topic name as its value.
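For example, a minimal sketch of the Kafka send action parameter, reusing the request topic from the HTML generation example later in this section:
```json
{
  "topicName": "ai.flowx.in.document.html.in"
}
```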
Once the setup is complete, you can begin adding custom actions to your processes.
Let's explore a few examples that cover both the configuration and integration with the engine for all the use cases supported by the plugin:
# Generating documents
One of the key features of the Documents plugin is the ability to generate new documents using custom templates, which can be pre-filled with data relevant to the current process instance.
These templates can be easily configured using the [**What You See Is What You Get (WYSIWYG)**](../../wysiwyg) editor. You can create and manage your templates by accessing the **Document Templates** section in [**FlowX Designer**](../../../../flowx-designer/overview).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/docs_plugin_template.png)
## Generating documents from HTML templates
The Documents plugin simplifies the document generation process through predefined templates. This example focuses on generating documents using HTML templates.
## Prerequisites
1. **Access permissions**: Ensure that you have the necessary permissions to manage document templates (more details [**here**](../../../../../setup-guides/plugins-access-rights/configuring-access-rights-for-documents)). The user account used for these operations should have the required access rights.
2. **Kafka configuration**: Verify that the Kafka messaging system is properly configured and accessible. The documents plugin relies on Kafka for communication between nodes.
* **Kafka topics**: Familiarize yourself with the Kafka topics used for these operations (later in this section)
## Creating an HTML template
To begin the document generation process, HTML templates must be created or imported. Utilize the [**WYSIWYG**](/4.1.x/docs/platform-deep-dive/plugins/wysiwyg) editor accessible through **FLOWX Designer → Plugins → Document templates**.
Learn more about managing HTML templates:
Before using templates, ensure they are in a **Published** state. Document templates marked as **Draft/In Progress** will not undergo the generation process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/ocr_doc_template.gif)
We've created a comprehensive FlowX.AI Academy course guiding you through the process of **Creating a Document Template in Designer**. Access the course [**here**](https://academy.flowx.ai/catalog/info/id:172) for detailed instructions and insights.
## Sending a document generation request
Consider a scenario where you need to send a personalized document to a customer based on specific details they provide. Create a process involving a [**User task**](../../../../building-blocks/node/user-task-node), a [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#message-send-task), and a [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#message-receive-task).
In the initial user task node, users input information.
The second node (Kafka send) creates a request with a specified template and keys corresponding to user-filled values.
The third node (Kafka receive) receives the reply, containing the generated document under the specified key.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/doc_plugin_upload_prev_ex.png)
1. Add a **User task** and configure it with UI elements for user input.
In this example, three UI elements, comprising two input fields and a select (dropdown), will be used. Subsequently, leverage the keys associated with these UI elements to establish a binding with the template. This binding enables dynamic adjustments to the template based on user-input values, enhancing flexibility and customization.
2. Configure the second node (Send Message Task) by adding a **Kafka send action**.
3. Specify the [**Kafka topic**](../../../../../setup-guides/plugins-setup-guide/documents-plugin-setup#kafka-configuration) to which the request should be sent, enabling the Process Engine to process it; in our example it is `ai.flowx.in.document.html.in`.
To identify your defined topics in your current environment, follow the next steps:
1. From the **FlowX Designer** main screen, navigate to the **Platform Status** menu at the bottom of the left sidebar.
2. In the FlowX Components list, scroll to the **document-plugin-mngt** line and press the eye icon on the right side.
3. In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → document → generate**. Under HTML and PDF you will find the in and out topics for generating HTML or PDF documents.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/kafka_topics_html_generate.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/generate_html.gif)
4. Fill in the message with the expected values in the request body:
```json
{
"documentList": [
{
"customId": "ClientsFolder",
"templateName": "AccountCreation",
"language": "en",
"data": {
"firstInput": "${application.client.firstName}",
"secondInput": "${application.client.lastName}",
"thirdInput": "${application.client.accountType}"
},
"includeBarcode": false //if you want to include a barcode, you can set it to true
}
]
}
```
* **documentList**: A list of documents to be generated with properties (name and value to be replaced in the document templates)
* **customId**: Client ID
* **templateName**: The name of the template that you want to use (defined in the **Document templates** section)
* **language**: Should match the language set on the template (a template can be created for multiple languages as long as they are defined in the system, see [**Languages**](/4.1.x/docs/platform-deep-dive/core-extensions/content-management/languages) section for more information)
When a template is used during process execution, the extracted default values follow the default language configured in the system. For instance, if the default language is set to English, the template's default values will be those assigned to the English version. Make sure the language of your template matches the default language of the system.
To verify the default language of the platform, navigate to **FlowX Designer → Content Management → Languages**.
* **includeBarcode**: A boolean (`true`/`false`) indicating whether a barcode should be included in the generated document
* **data**: A map containing the values that should be replaced in the document template (data that comes from user input). The keys used in the map should match the ones defined in the HTML template and your UI elements.
Ultimately, the configuration should resemble the presented image:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/ceva_model.png)
5. Configure the third node (Receive Message Task):
* Add the topic where the response will be sent; in our example `ai.flowx.updates.document.html.generate.v1` and its key: `generatedDocuments`
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/generate_template_receive.png)
## Receiving the document generation reply
The response, containing information about the generated documents, is sent to the output Kafka topic defined in the Kafka Receive Event Node. The response includes details such as file IDs, document types, and storage paths.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/html_generated_response.png)
Here is an example of a response after generation (received on `generatedDocuments` key):
```json
{
"generatedFiles": {
"ClientsFolder": {
"AccountCreation": {
"customId": "ClientsFolder",
"fileId": "320f4ec2-a509-4aa9-b049-87224594802e",
"documentType": "AccountCreation",
"documentLabel": "GENERATED_PDF",
"minioPath": "{{your_bucket}}/2024/2024-01-15/process-id-865759/ClientsFolder/6869_AccountCreation.pdf",
"downloadPath": "internal/files/320f4ec2-a509-4aa9-b049-87224594802e/download",
"noOfPages": 1,
"error": null
}
}
},
"error": null
}
```
* **generatedFiles**: The generated files, grouped by `customId` and then by template name.
* **customId**: Client ID.
* **fileId**: The ID of the generated file.
* **documentType**: The name of the document template.
* **documentLabel**: A label or description for the document.
* **minioPath**: The path where the converted file is saved. It represents the location of the file in the storage system, whether it's a MinIO path or an S3 path, depending on the specific storage solution.
* **downloadPath**: The download path for the converted file. It specifies the location from where the file can be downloaded.
* **noOfPages**: The number of pages in the generated file.
* **error**: If there were any errors encountered during the generation process, they would be specified here. In the provided example, the value is null, indicating no errors.
## Displaying the generated document
Once a document has been generated, you can display it using the Document Preview UI element. To facilitate this, let's extend the existing process with two supplementary nodes:
* **Task node**: This node is designated to generate the document path from the storage solution, specifically tailored for the Document Preview.
* **User task**: In this phase, we seamlessly integrate the Document Preview UI Element. Here, we incorporate a key that contains the download path generated in the preceding node.
For detailed instructions on displaying a generated or uploaded document, refer to the example provided in the following section:
# Getting URLs
In certain scenarios, obtaining URLs pointing to uploaded documents for use in integrations is essential. This process involves adding a custom action to your workflow that requests URLs from the Documents plugin.
## Prerequisites
Before retrieving document URLs, ensure:
1. **Access Permissions**: Ensure that the user account has the necessary access rights.
2. **Kafka Configuration**:
* **Verify Kafka Setup**: Ensure proper configuration and accessibility of the Kafka messaging system.
* **Kafka Topics**: Understand the Kafka topics used for these operations.
3. **Document Types**: Prepare the information needed to request URLs:
* `types`: A list of document types.
## Configuring the getting URLs process
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/doc_plugin_get_urls.png)
To obtain document URLs, follow these steps:
Create a process that will contain the following types of nodes:
* [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node) - used to send the get URLs request
* [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node) - used to receive the get URLs reply
* [**User Task**](../../../../building-blocks/node/user-task-node) - where you will perform the upload action
Start configuring the **User Task** node:
#### Node Config
* **Data stream topics**: Add the topic where the response will be sent; in this example `ai.flowx.updates.document.html.persist.v1` and its key: `uploadedDocument`.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/doc_plugin_get_urls1.png)
#### Actions
We will configure the following node actions:
* Upload File action ("uploadDocument") will have two child actions:
* Send Data to User Interface ("uploadDocumentSendToInterface")
* Business Rule ("uploadDocumentBR")
* Save Data action ("save")
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_document_actions.png)
Configure the parameters for the **Upload Action**:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_action_get_urls.png)
For more details on uploading a document and configuring the file upload child actions, refer to the following sections:
* [**Upload document**](./uploading-a-new-document)
* [**Upload action**](../../../../building-blocks/actions/upload-file-action)
Next, configure the **Send Message Task (Kafka)** node by adding a **Kafka Send Action** and specifying the `..in` [**Kafka topic**](../../../../../setup-guides/plugins-setup-guide/documents-plugin-setup#kafka-configuration) to send the request.
Identify defined topics in your environment:
* Navigate to **Platform Status > FlowX Components > document-plugin-mngt** and press the eye icon on the right side.
* In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → document → get**. Here you will find the in and out topics for getting URLs for documents.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/get_urls_kafka.png)
Fill in the body of the request message for the Kafka Send action to send the get URLs request:
![Request Message](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/doc_plugin_get_urls2.png)
* `types`: A list of document types.
#### Message request example
Example of a message following the custom integration data model:
```json
{
"types": [
"119435",
"119435"
]
}
```
Configure the [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-receive-task-node) by adding the `..out` Kafka topic on which the response will be sent.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/doc_plugin_get_urls3.png)
## Receiving the reply
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/doc_plugin_get_urls4.png)
The response body should include the following values:
* **success**: A boolean indicating whether the document exists and the URL was generated successfully.
* **fullName**: The full name of the document file, including the directory path.
* **fileName**: The name of the document file without the extension.
* **fileExtension**: The extension of the document file.
* **url**: The full download URL for the document.
#### Message response example
```json
[
{
"success": true,
"fullName": "2024/2024-08-27/process-id-1926248/1234_1926248/7715_1926248.pdf",
"fileName": "2024/2024-08-27/process-id-1926248/1234_1926248/7715_1926248",
"fileExtension": "pdf",
"url": "http://minio:9000/qualitance-dev-paperflow-devmain/2024/2024-08-27/process-id-1926248/1234_1926248/7715_1926248.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=minio%2F20240827%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240827T150257Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&X-Amz-Signature=575333697714249e9deb295359f5ba9365f618f53303bf5583ca30a9b1c45d84"
}
]
```
# Listing stored files
If you are using an S3-compatible cloud storage solution such as MinIO, the stored files are organized into buckets. A bucket serves as a container for objects stored in Amazon S3. The Documents Plugin provides a REST API that allows you to easily view the files stored in the buckets.
To determine the partitioning strategy used for storing generated documents, you can access the following key in the configuration:
`application.file-storage.partition-strategy`
```yaml
application:
defaultLocale: en
supportedLocales: en, ro
#fileStorageType is the configuration that activates one FileContentService implementation. Valid values: minio / fileSystem
file-storage:
type: s3
disk-directory: MS_SVC_DOCUMENT
partition-strategy: NONE
```
The `partition-strategy` property can have two possible values:
* **NONE**: In this case, documents are saved in separate buckets for each process instance, following the previous method.
* **PROCESS\_DATE**: Documents are saved in a single bucket with a subfolder structure based on the process date. For example: `bucket/2022/2022-07-04/process-id-xxxx/customer-id/file.pdf`.
## REST API
The Documents Plugin provides the following REST API endpoints for interacting with the stored files:
### List buckets
Check out the [**List buckets API reference**](/4.0/docs/api/list-buckets) for more details.
* This endpoint returns a list of available buckets.
### List objects in a bucket
Check out the [**List objects in a bucket API reference**](/4.0/docs/api/list-objects-in-buckets) for more details.
* This endpoint retrieves a list of objects stored within a specific bucket. Replace `BUCKET_NAME` with the actual name of the desired bucket.
### Download file
Check out the [**Download file API reference**](/4.0/docs/api/download-file.mdx) for more details.
* This endpoint allows you to download a file by specifying its path or key.
# Managing templates
The Documents plugin provides the flexibility to define and manage HTML templates for document generation, enabling customization through various parameter types.
Additionally, the platform incorporates a [**What You See Is What You Get (WYSIWYG)**](../../wysiwyg) editor, allowing users to have a real-time, visual representation of the document or content during the editing process. Furthermore, you have the capability to test and review your template by downloading it as a PDF.
A WYSIWYG editor typically provides two main views:
* **Design View (Visual View)**: In this view, you see a visual representation of your content as it would appear when rendered in a web browser or other output medium. It closely resembles the final output, allowing you to format text, add images, and apply styles directly within the visual interface. This view aims to provide a real-time preview of the document's appearance.
* **HTML View (Source View)**: In this view, you can see and edit the underlying HTML code that corresponds to the content displayed in the Design View. It shows the raw HTML markup, providing a more detailed and technical representation of the document. You can manually edit the HTML code to make precise adjustments or to implement features not available in the visual interface.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/wysiwyg_example.gif)
Explore the different parameter types and their specifications:
## Configuring HTML templates
In the following example, we will create a template using the HTML View (Source View).
To create a document template, navigate to the **Document Templates** section in the **Designer**, select **New document** from the menu in the top-right corner, name your template, and click **Create**.
Now follow the next steps to design a new template:
1. **Open the WYSIWYG editor**:
Access the WYSIWYG editor within the Document Management Plugin, found in the **FlowX Designer → Plugins → Document templates** section.
* **Language Configuration**: Create a dedicated template for each language specified in your system.
To confirm the installed languages on the platform, go to **FLOWX.AI Designer → Content Management → Languages**.
2. **Design the document header**:
Begin by creating a header section for the document, including details such as the company logo and title.
```html
<!-- illustrative markup; the logo placeholder syntax is an assumption -->
<img src="${companyLogo}" alt="Company logo" />
<h1>Offer Document</h1>
```
Data specifications (process data):
```json
"data": {
"companyLogo": "INSERT_BASE64_IMAGE",
"offerTitle": "Client Offer"
}
```
3. **Text Parameters for Client Information**:
Include a section for client-specific information using text parameters.
Text parameters enable the inclusion of dynamic text in templates, allowing for personalized content.
```html
<!-- illustrative markup; the placeholders map to the data keys below -->
<h3>Client Information</h3>
<p>Client Name: ${clientName}</p>
<p>Client ID: ${clientId}</p>
```
Data specifications:
```json
"data": {
"clientName": "John Doe",
"clientId": "JD123456"
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/client_infor.png)
4. **Dynamic table for offer details:**
Create a dynamic table to showcase various details of the offer.
```html
<!-- illustrative markup; one table row is rendered per offerItems entry -->
<h3>Offer Details</h3>
<table>
  <tr><th>Item</th><th>Description</th><th>Price</th></tr>
</table>
```
Data specifications:
```json
"data": {
"offerItems": [
{ "name": "Product A", "description": "Description A", "price": "$100" },
{ "name": "Product B", "description": "Description B", "price": "$150" },
{ "name": "Product C", "description": "Description C", "price": "$200" }
]
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/table_offer.png)
5. **Dynamic sections for certain conditions:**
Dynamic sections allow displaying specific content based on certain conditions. For example, you can display a paragraph only when a certain condition is met.
```html
<!-- illustrative markup; each paragraph is rendered only when its boolean flag (e.g. isPreferredClient) is true -->
<h3>Dynamic Sections for Certain Conditions</h3>
<p>This is displayed if it is a preferred client. They are eligible for special discounts!</p>
<p>This is displayed if the client has specific requests. Please review them carefully.</p>
<p>This is displayed if the client has an active contract with us.</p>
```
Data specifications:
```json
"data": {
"clientName": "John Doe",
"clientId": "JD123456",
"isPreferredClient": false,
"hasSpecialRequest": false,
"isActiveContract": true
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/display_conditions.png)
6. **Lists**:
Lists are useful for displaying values from selected items in a checkbox as a bulleted list.
```html
<p>Income source:</p>
<ul><!-- illustrative; one <li> is rendered per incomeSource entry --></ul>
```
Data specifications:
```json
{
"data": {
"incomeSource": [
"Income 1",
"Income 2",
"Income 3",
"Income 4"
]
}
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/list_html.png)
7. **Include Image for Authorized Signature:**
Embed an image for the authorized signature at the end of the document.
```html
<img src="${signature}" alt="Authorized signature" /> <!-- illustrative; bound to the signature parameter -->
```
Data Specifications:
```json
"data": {
"signature": "INSERT_BASE64_IMAGE"
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/authorized_sign.png)
8. **Barcodes**:
Set the `includeBarcode` parameter to `true` if you want to include a barcode. For information on how to use barcodes and the OCR plugin, check the following section:
9. **Checkboxes for Consent**:
Consent checkboxes in HTML templates are commonly used to obtain explicit agreement or permission from users before proceeding with certain actions or processing personal data.
```html
<input type="checkbox" /> I agree to the processing of my personal data. <!-- illustrative consent checkbox -->
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/checkbox_html.gif)
10. **Data Model**:
The document template editor also provides a **Data Model** tab. Here you define parameters: dynamic data fields that will be replaced with the values you supply in the payload, such as a first name or a company name.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/data_model_input.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/dataModelTemplate.png)
## Styling
HTML template styling plays a crucial role in enhancing the visual appeal, readability, and user experience of generated documents.
We will apply the following styling to the previously created HTML template using the **Source** view of the editor.
```css
```
In the end the template will look like this:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/manage_template_final.png)
## Samples
### Without dynamic data
The final result after generating the template without dynamic data:
Download PDF sample [**here**](../../../assets/HTMLExample.pdf).
### With dynamic data
The final result after generating the template with the following dummy process data:
```json
"data": {
"offerTitle": "Client Offer",
"clientName": "John Doe",
"clientId": "JD123456",
"isPreferredClient": false,
"hasSpecialRequest": false,
"isActiveContract": true,
"offerItems": [
{ "name": "Product A", "description": "Description A", "price": "$100" },
{ "name": "Product B", "description": "Description B", "price": "$150" },
{ "name": "Product C", "description": "Description C", "price": "$200" },
],
"incomeSource": [
"Income 1",
"Income 2",
"Income 3",
"Income 4"
]
}
```
Download a PDF sample [**here**](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/726_ManageHTMLTemplate%20.pdf).
# Splitting documents
You can split a document into multiple parts using the Documents plugin.
This guide provides step-by-step instructions on how to split a document, such as when a user uploads a bulk scanned file that needs to be separated into distinct files.
## Prerequisites
1. **Access Permissions**: Ensure that you have the necessary permissions to use the Documents Plugin. The user account used for these operations should have the required access rights.
2. **Kafka Configuration**: Verify that the Kafka messaging system is properly configured and accessible. The Documents Plugin relies on Kafka for communication between nodes.
* **Kafka Topics**: Familiarize yourself with the Kafka topics used for these operations (later in this section)
3. Before initiating the splitting process, ensure you have the unique ID of the file in the storage solution. This ensures that the splitting is performed on an already uploaded file.
Ensure that the uploaded document contains more than one file.
You have two options to obtain the file ID:
* Extract the file ID from a [**Response Message**](./uploading-a-new-document#response-message-example-2) of an upload file request. For more details, refer to the [**upload process documentation**](./uploading-a-new-document).
* Extract the file ID from a [**Response Message**](./generating-from-html-templates#receiving-the-document-generation-reply) of a generate from template request. For more details, refer to the [**document generation reply documentation**](./generating-from-html-templates).
In the following example, we will use the `fileId` generated for a document with multiple files using [**Uploading a New Document**](./uploading-a-new-document) scenario.
```json
{
"customId": "119407",
"fileId": "446c69fb-32d2-44ba-a0b2-02dbb55e7eea",
"documentType": "BULK",
"documentLabel": null,
"minioPath": "flowx-dev-process-id-119407/119407/465_BULK.pdf",
"downloadPath": "internal/files/446c69fb-32d2-44ba-a0b2-02dbb55e7eea/download",
"noOfPages": null,
"error": null
}
```
## Configuring the splitting process
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/split_document.png)
To create a process that splits a document into multiple parts, follow these steps:
1. Create a process that includes a [**Send Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-send-task-node) node and a [**Receive Message Task (Kafka)**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-receive-task-node) node:
* Use the **Send Message Task** node to send the splitting request.
* Use the **Receive Message Task** node to receive the splitting reply.
2. Configure the **first node (Send Message Task)** by adding a **Kafka Send Action**.
3. Specify the [**Kafka topic**](../../../../../setup-guides/plugins-setup-guide/documents-plugin-setup#kafka-configuration) where you want to send the splitting request.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/kafka_split_topic.png)
To identify your defined topics in your current environment, follow the next steps:
1. From the FLOWX.AI main screen, navigate to the Platform Status menu at the bottom of the left sidebar.
2. In the FLOWX Components list, scroll to the document-plugin-mngt line and press the eye icon on the right side.
3. In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → document → split**. Here you will find the in and out topics for splitting documents.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/kafka_topics_split.png)
4. Fill in the body of the message request.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/split_kafka_action.png)
#### Message request example
```json
{
"parts": [
{
"documentType": "BULK",
"customId": "119407",
"pagesNo": [
1,
2
],
"shouldOverride": true
}
],
"fileId": "446c69fb-32d2-44ba-a0b2-02dbb55e7eea"
}
```
* **fileId**: The file ID of the document that will be split
* **parts**: A list containing information about the expected document parts
* **documentType**: The document type.
* **customId**: The unique identifier for your document (it could be for example the ID of a client)
* **shouldOverride**: A boolean value (true or false) indicating whether to override an existing document if one with the same name already exists
* **pagesNo**: The pages that you want to separate from the document
5. Configure the **second node (Receive Message Task)** by adding a Data stream topic:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/split_response_kafka.png)
The response will be sent to this `..out` Kafka topic.
## Receiving the reply
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/split_response.png)
The following values are expected in the reply body:
* **docs**: A list of documents.
* **customId**: The unique identifier for your document (matching the name of the folder in the storage solution where the document is uploaded).
* **fileId**: The ID of the file.
* **documentType**: The document type.
* **minioPath**: The storage path for the document.
* **downloadPath**: The download path for the document.
* **noOfPages**: The number of pages in the document.
* **error**: Any error message in case of an error during the splitting process.
Here's an example of the response JSON:
#### Message response example
```json
{
"docs": [
{
"customId": "119407",
"fileId": "c4e6f0b0-b70a-4141-993b-d304f38ec8e2",
"documentType": "BULK",
"documentLabel": null,
"minioPath": "flowx-dev-process-id-119408/119407/466_BULK.pdf",
"downloadPath": "internal/files/c4e6f0b0-b70a-4141-993b-d304f38ec8e2/download",
"noOfPages": 2,
"error": null
}
],
"error": null
}
```
The split document is now available in the storage solution and can be downloaded:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/2024-01-30%2013.39.35.gif)
# Creating and uploading a new document
A comprehensive guide to integrating document creation from templates, managing uploads, and configuring workflows for document processing.
User task nodes provide a flexible framework for defining and configuring UI templates and actions, such as an upload file button.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/docs_upload_proc_4.5.png)
## Prerequisites
Before you begin, ensure the following prerequisites are met:
* **Access Permissions**: Ensure that you have the necessary permissions to use the Documents Plugin. The user account used for these operations should have the required access rights.
* **Kafka Configuration**: Verify that the Kafka messaging system is properly configured and accessible. The Documents Plugin relies on Kafka for communication between nodes.
* **Kafka Topics**: Familiarize yourself with the Kafka topics used for these operations (later in this section).
To upload a document using this process, follow the steps outlined below.
## Step-by-step guide: uploading and previewing a document
In the [previous section](./generating-from-html-templates), you learned how to generate documents from HTML templates. This section focuses on creating a process where users can generate a document, review and sign it, and subsequently upload it.
### Process overview
This process involves several key nodes, each performing specific tasks to ensure smooth document creation, preview, and upload:
* **Start and End Nodes**: These nodes mark the beginning and end of the process, respectively.
* **User Task Node**: Collects user input necessary for document generation.
* **Send and Receive Message Tasks (Kafka)**: Handle communication with Kafka for generating the document and retrieving it after processing.
* **Service Task Node**: Appends the path of the generated document to the process, enabling further actions.
* **User Task Nodes**: Facilitate the preview of the generated document and manage the upload of the signed document.
## Configuring the process
Follow the steps outlined in [**Generating from HTML templates**](./generating-from-html-templates) to configure the document generation part of the process.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/doc_plugin_upload_prev_ex.png)
If you only need to upload a new file without generating it from templates, skip the template generation steps.
After configuration, your request and response messages should resemble the examples below.
#### Request message example
This JSON structure represents a Kafka message sent through the `..in` topic to initiate a request in the Process Engine. It includes information for generating an "AccountCreation" document with the custom ID "119246" in English. The data specifies client details extracted dynamically from user input (first name, last name, age, country) and company information (name, registration date).
This is an example of a message that follows the custom integration data model.
```json
{
"documentList": [
{
"customId": "119246", //this will match the name of the folder in the storage solution
"templateName": "AccountCreation",
"language": "en",
"data": {
"application": {
"client": {
"firstName": "John",
"lastName": "Doe",
"age": "33",
"country": "AU"
},
"company": {
"name": "ACME",
"registrationDate": "24.01.2024"
}
}
},
"includeBarcode": false
}
]
}
```
#### Response message example
This JSON structure represents the response received on the `..out` Kafka topic, where the Process Engine expects a reply. It contains details about the generated PDF file corresponding to the custom ID "119246" and the "AccountCreation" template. The response provides file-related information such as file ID, document type, document label, storage path, download path, number of pages, and any potential errors (null if none). The paths provided in the response facilitate access and download of the generated PDF file.
```json
{
"generatedFiles": {
"119246": {
"AccountCreation": {
"customId": "119246",
"fileId": "f705ae5b-f301-4700-b594-a63b50df6854",
"documentType": "AccountCreation",
"documentLabel": "GENERATED_PDF",
"minioPath": "flowx-dev-process-id-119246/119246/457_AccountCreation.pdf", // path to the document in the storage solution
"downloadPath": "internal/files/f705ae5b-f301-4700-b594-a63b50df6854/download", //link for download
"noOfPages": 1,
"error": null
}
}
},
"error": null
}
```
Configure the **preview** part of the process.
![Preview and Upload](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/preview_path.png)
#### Service task node
We will configure the service task node to construct the file path for the generated document.
Configure a business rule to construct a file path for the generated document. Ensure the base admin path is specified.
Ensuring the base admin path is specified is crucial, as it grants the required administrative rights to access the endpoint responsible for document generation.
#### Actions
**Action Edit**
* **Action Type**: Set to **Business Rule**
* **Trigger Type**: Choose **Automatic** because it is not a user-triggered action
* **Required Type**: Set as **Mandatory**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/create_document_path.png)
**Parameters**
* **Language**: We will use **MVEL** for this example
* **Body Message**: Fill in the body message request
```js
adminPath = "https://admin-main.playground.flowxai.dev/document/";
processInstanceId = input.?generatedDocuments.?generatedFiles.keySet().toArray()[0];
downloadPath = input.?generatedDocuments.?generatedFiles.?get(processInstanceId).Company_Client_Document.downloadPath;
if(downloadPath != null){
output.put("generatedDocuments", {
"filePath" : adminPath + downloadPath
});
}
```
* **adminPath**: Base URL for the admin path.
```java
adminPath = "https://admin-main.playground.flowxai.dev/document/";
```
* **processInstanceId**: Extracts the process instance ID from the input. Assumes an input structure with a generatedDocuments property containing a generatedFiles property. Retrieves the keys, converts them to an array, and selects the first element.
```java
processInstanceId = input.?generatedDocuments.?generatedFiles.keySet().toArray()[0];
```
* **downloadPath**: Retrieves the downloadPath property using the obtained processInstanceId.
```java
downloadPath = input.?generatedDocuments.?generatedFiles.?get(processInstanceId).Company_Client_Document.downloadPath;
```
* **if condition**: Checks if downloadPath is not null and constructs a new object in the output map.
```java
if(downloadPath != null){
output.put("generatedDocuments", {
"filePath" : adminPath + downloadPath
});
}
```
### User Task
Now we will configure the user task to preview the generated document.
#### Actions
Configure the **Actions edit** section:
* **Action Type**: Set to **Save Data**.
* **Trigger Type**: Choose **Manual** to allow user-triggered action.
* **Required Type**: Set as **Mandatory**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/document_preview_action.png)
Let's see what we have so far.
The screen where you can fill in the client details:
![Client Details](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_fill_data.gif)
The screen where you can preview and download the generated document:
![Preview Document](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/preview_document_vis.gif)
Configure the user task where we will upload the signed document generated from the previous document template.
#### Node Config
* **Swimlane**: Choose a swimlane (if multiple) to restrict access to specific user roles.
* **Stage**: Assign a stage to the node.
* **Data stream topics**: Add the topic where the response will be sent; in this example `ai.flowx.updates.document.html.persist.v1` and its key: `uploadedDocument`.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_document_node_config.png)
#### Actions
Configure the following node actions:
* Upload File action with two child actions:
* Business Rule
* Send Data to User Interface
* Save Data action
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_document_actions.png)
#### Upload file action
This is a standard predefined FlowX node action for uploading files. It works through Kafka, using the `persist` topics.
#### Action edit
* **Action Type**: Set to **Upload File**.
* **Trigger Type**: Choose **Manual** to allow user-triggered action.
* **Required Type**: Set it as **Optional**.
* **Repeatable**: Check this option if the action can be triggered multiple times.
#### Parameters
* **Topics**: Kafka topic where the file will be posted, in this example `ai.flowx.in.document.persist.v1`.
To identify your defined topics in your current environment, follow the next steps:
* From the FLOWX.AI main screen, navigate to the **Platform Status** menu at the bottom of the left sidebar.
* In the FLOWX Components list, scroll to the **document-plugin-mngt** line and press the eye icon on the right side.
* In the details screen, expand the `KafkaTopicsHealthCheckIndicator` line and then **details → configuration → topic → document → persist**. Here you will find the in and out topics for persisting (uploading) documents.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/kafka_topics_persist.png)
* **Document Type**: Metadata for the document plugin, in this example `BULK`.
* **Folder**: Configure a value by which the file will be identified; in this example it will be the `${processInstanceId}` (it will be replaced at runtime with a generated process instance ID).
* **Advanced configuration (Show headers)**: This represents a JSON value that will be sent on the headers of the Kafka message, in this example:
```json
{"processInstanceId": ${processInstanceId}, "destinationId": "upload_document", "callbacksForAction": "uploadDocument"}
```
`callbacksForAction` - the value of this key is a string specifying the callback action associated with the "upload\_document" destination ID (node). As part of this event-driven setup (Kafka send action), the callback is invoked once the "upload\_document" action is completed.
#### Data to send
* **Keys**: Used when data is sent from the frontend via an action for data validation.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_config_a.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_config_b.png)
Now, configure the child actions of Upload File Action.
#### Business rule
This is necessary to create the path to display the uploaded document.
**Action Edit**
* **Order**: Set to **1** so it will be processed before the second child action.
* **Action Type**: Set to **Business Rule**.
* **Trigger Type**: Choose **Automatic**, it does not need user intervention.
* **Required Type**: Set as **Mandatory**.
* **Repeatable**: Check this option if the action can be triggered multiple times.
**Parameters**
* **Language**: In this example we will use **MVEL**.
* **Body Message**: Fill in the body of the message request by applying logic similar to that used when configuring the "preview\_document" node. Establish a path that will later be used to display the uploaded document within a preview UI component.
```js
adminPath = "https://admin-main.playground.flowxai.dev/document/";
downloadPath = input.?uploadedDocument.?downloadPath;
if(downloadPath != null){
output.put("uploadedDocument", {
"filePath" : adminPath + downloadPath
});
}
```
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_business_rule_a.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_business_rule_b.png)
#### Send Data to User Interface
This is necessary to send the previously created information to the frontend.
**Action Edit**
* **Order**: Set to **2** so it will be processed after the previously created Business Rule.
* **Action Type**: Set to **Send data to user interface**.
* **Trigger Type**: Choose **Automatic**, it does not need user intervention.
* **Required Type**: Set as **Mandatory**.
* **Repeatable**: Check this option if the action can be triggered multiple times.
**Parameters**
* **Message Type**: Set to **Default**.
* **Body Message**: Populate the body of the message request; this object will be utilized to bind it to the document preview UI element.
```json
{
"uploadedDocument": {
"filePath": "${uploadedDocument.filePath}"
}
}
```
* **Target Process**: Specifies the running process instance to which this message should be sent; set to **Active Process**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/send_to_UI_a.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/send_to_UI_b.png)
#### Save Data
Configure the last node action to save all the data.
**Action edit**
* **Order**: Set to **3**.
* **Action Type**: Set to **Save Data**.
* **Trigger Type**: Choose **Manual** to allow user-triggered action.
* **Required Type**: Set as **Mandatory**.
#### Request message example
To initiate the document processing, a Kafka request with the following JSON payload will be sent through the `..in` topic:
This is an example of a message that follows the custom integration data model.
```json
{
"tempFileId": "05081172-1f95-4ece-b2dd-1718936710f7", //a unique identifier for the temporary file
"customId": "119246", //a custom identifier associated with the document
"documentType": "BULK" //the type of the document
}
```
#### Response message example
Upon successful processing, you will receive a JSON response on the `..out` topic with details about the processed document:
```json
{
"customId": "119246",
"fileId": "96975e03-7fba-4a03-99b0-3b30c449dfe7",
"documentType": "BULK",
"documentLabel": null,
"minioPath": "flowx-dev-process-id-119246/119246/458_BULK.pdf",
"downloadPath": "internal/files/96975e03-7fba-4a03-99b0-3b30c449dfe7/download",
"noOfPages": null,
"error": null
}
```
Now the screen is configured for uploading the signed document:
![Upload File](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/upload_document_signed.gif)
## Receiving the reply
The response, containing information about the generated and uploaded documents as mentioned earlier, is sent to the output Kafka topic defined in the Kafka Receive Event Node. The response includes details such as file IDs, document types, and storage paths.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/final_response.png)
The reply body is expected to contain the following values:
* **customId**: A custom identifier associated with the document.
* **fileId**: The ID of the file.
* **documentType**: The document type.
* **minioPath**: The path where the uploaded file is saved. It represents the location of the file in the storage system, whether it's a MinIO path or an S3 path, depending on the specific storage solution.
* **downloadPath**: The download path for the uploaded file. It specifies the location from where the file can be downloaded.
* **noOfPages**: The number of pages in the document (if applicable).
* **filePath**: The path to the file, built by the business rule in our example so we can display the document (see the sketch below).
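Putting it all together, once the business rule runs, the process data would contain an object along these lines (the value is simply the `adminPath` concatenated with the `downloadPath` from the response above; shown for illustration only):
```json
{
  "uploadedDocument": {
    "filePath": "https://admin-main.playground.flowxai.dev/document/internal/files/96975e03-7fba-4a03-99b0-3b30c449dfe7/download"
  }
}
```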
# Forwarding notifications
If the Notification service is not directly connected to an SMTP / SMS server and you want to use an external system for sending the notifications, you can use the notification plugin just to forward the notifications to your custom implementation.
### Check needed Kafka topics
You will need the names of the Kafka topics defined by the following environment variables:
* `KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN` - topic used to trigger the request to send a notification
* `KAFKA_TOPIC_NOTIFICATION_EXTERNAL_OUT` - the notification will be forwarded on this topic to be handled by an external system
* `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` - topic used for sending replies after sending the notification
You can check the defined topics by going to **FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > notification**.
### Example: send a notification from a business flow
Let's pick a simple use case. Imagine we need to send a new welcome letter when we onboard a new customer. You must follow the next steps:
1. Configure the [template](./managing-notification-templates) that you want to use for the welcome email, using the [WYSIWYG Editor](../../wysiwyg).
Make sure that the **Forward on Kafka** checkbox is ticked, so the notification will be forwarded to an external adapter.
2. Configure the data model for the template.
3. To configure the notification template, first, you need to define some information stored in the [Body](../../wysiwyg#notification-templates):
* **Type** - MAIL (for email notifications)
* ❗️**Forward on Kafka** - if this box is checked, the notification is not sent directly by the plugin to the destination, but is forwarded to another adapter
* **Language** - choose the language for your notification template
* **Subject** - enter a subject
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notification_email.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/data_model_notif.png)
4. Use the FlowX Designer to create a process definition.
5. Add a [**Send Message Task**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-send-task-node) and a [**Receive Message Task**](../../../../building-blocks/node/message-send-received-task-node#configuring-a-message-receive-task-node) (one to send the request, one to receive the reply).
6. Check if the needed topic (defined at the following environment variable) is configured correctly: `KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN`.
7. Add the proper configuration to the action: the Kafka topic and the body message.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notif_params_send.png)
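A minimal sketch of such a request body, using only the fields confirmed by the reply below (any template content parameters would be added according to your template's data model):
```json
{
  "templateName": "welcomeLetter",
  "channel": "MAIL",
  "language": "en",
  "receivers": [
    "john.doe@mail.com"
  ]
}
```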
The **Forward on Kafka** option will forward the notification to an external adapter; make sure the needed Kafka topic for forwarding is defined/overwritten using the `KAFKA_TOPIC_NOTIFICATION_EXTERNAL_OUT` environment variable.
8. Run the process and look for the response (you can view it via the **Audit log**) or check the responses on the Kafka topic.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notif_send_resp.png)
Response example at `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT`:
```json
{
"templateName": "welcomeLetter",
"receivers": [
"john.doe@mail.com"
],
"channel": "MAIL",
"language": "en"
}
```
# Managing notification templates
You can create and manage notification templates using FlowX.AI Designer by accessing the dedicated section.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notif_overview.png)
### Configuring a template
To configure a notification template, first, you need to select some information stored in the **Body**:
1. **Type** - the notification type, either MAIL or SMS
2. [**Forward on Kafka**](./forwarding-notifications-to-an-external-system) - if this checkbox is ticked, the notification is not being sent directly by the plugin to the destination, but forwarded to another adapter (this is mandatory for SMS notifications templates, as they require an external adapter)
3. **Language** - choose the language for your notification template
4. **Subject** - enter a subject
![Notification template](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notifications_template.png)
#### Editing the content
You can edit the content of a notification template by using the [WYSIWYG](../../wysiwyg) editor embedded in the notification template body.
### Configuring the data model
Using the **Data model** tab, you can define key pair values (parameters) that will be displayed and reused in the editor. Multiple parameters can be added:
* STRING
* NUMBER
* BOOLEAN
* OBJECT
* ARRAY (which has an additional `item` field)
![Data model](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notifications_data_model.png)
After you have defined some parameters in the **Data Model** tab, you can type "**#**" in the editor to trigger a dropdown from which you can choose the parameter you want to use or reuse.
![Data model](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/data_model1.gif)
[WYSIWYG Editor](../../wysiwyg)
### Testing the template
You can use the test function to ensure that your template configuration is working as it should before publishing it.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/testing_notif_template.gif)
In the example above, some keys (marked as mandatory) were not used in the template, letting you know that you've missed some important information. After you enter all the mandatory keys, the notification test will go through:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notifications_email.png)
### Other actions
When opening the contextual menu (accessible by clicking on the breadcrumbs button), you have multiple actions to work with the notifications templates:
* Publish template - publish a template (it will then be displayed in the **Published** tab); you can also clone published templates
* Export template - export a template (JSON format)
* Show history - (version history and last edited)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notif_export_etc.png)
# Notifications plugin
The Notifications plugin can be easily added to your custom FlowX.AI deployment. The plugin **enhances the core platform capabilities with functionality specific to sending various notifications**.
You have the possibility to send various types of notifications:
* Email notifications
* SMS templates to mobile devices
To send notifications via the SMS channel, a third-party SMS provider is needed, for example, [Twilio](https://www.twilio.com/).
* Forward custom notifications to external outgoing services
* Generate and validate OTPs (one-time passwords) for user identity verification
You can also use the plugin to track what notifications have been sent and to whom.
Let's go through the steps needed to deploy and set up the plugin:
## Using Notifications plugin
After deploying the notifications plugin in your infrastructure, you can start sending notifications by configuring related actions in your **process definitions**.
Before adding the corresponding **actions** in your process definition, you will need to follow a few steps:
* make sure all prerequisites are prepared, for example, the [notification templates](./managing-notification-templates) you want to use
* make sure the database is configured properly
* for each **Kafka** event type, you will need two Kafka topics:
* one for the request sent from the FlowX Engine to the Notifications plugin
* one for the corresponding reply
**DEVELOPER**
The topic names configured for the plugin should match the ones used when configuring the engine and when adding plugin-related process actions:
* the FlowX Engine listens for messages on topics with names matching a certain pattern, so make sure to use an outgoing topic name that matches the pattern configured in the Engine
More details: [here](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka)
* to make a request to the plugin, the process definition needs to have an action of type `Kafka send` that has an action parameter with key `topicName` and the needed topic name as its value (see the sketch below)
* to receive a reply from the plugin, the process definition needs to have a receiving node with a node value with key `topicName` and the topic name as the value
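For illustration only, the `topicName` action parameter is a simple key/value pair; the topic name below is a placeholder borrowed from the send-notification example later in this document and must match your environment's configuration:
```json
{
  "topicName": "flowx-notifications-qa"
}
```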
After all the setup is in place, you can start adding custom actions to the processes.
Let's go through a few examples. These examples cover both the configuration part and the integration with the engine for all the use cases.
# Generate OTP
There are some cases when you will need to generate an OTP (One Time Password) from a business flow, for example when validating an email account.
The notifications plugin handles both the actual OTP code generation and sending the code to the user using a defined [notification template](../managing-notification-templates).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/otp_archi.png)
## Define needed Kafka topics
**DEVELOPER**: Kafka topic names can be set/found by using the following environment variables in your Notifications plugin deployment:
* `KAFKA_TOPIC_OTP_GENERATE_IN` - the topic used to send the OTP generation request to the plugin
* `KAFKA_TOPIC_OTP_GENERATE_OUT` - after the OTP is generated and sent to the user, this is the topic used to send the response back to the Engine.
The Engine listens for messages on topics with names matching a certain pattern, so make sure to use an outgoing topic name that matches the pattern configured in the Engine.
## Request to generate an OTP
Values expected in the request body:
* **templateName**: the name of the notification template that is used (created using the [WYSIWYG](../../../wysiwyg) editor)
* **channel**: notification channel (SMS / MAIL)
* **recipient**: notification receiver: email / phone number
* **notification template content parameters (for example, clientId)**: parameters that should be replaced in the [notification template](../managing-notification-templates)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notifications_params.png)
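A minimal request body sketch based on the fields above, assuming an SMS template and a `clientId` content parameter (the template name and values are placeholders):
```json
{
  "templateName": "otpSmsTemplate",
  "channel": "SMS",
  "recipient": "+40700000000",
  "clientId": "1871201460101"
}
```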
## Response from generate OTP
Values expected in the reply body:
* **processInstanceId** = process instance ID
* **clientId** = the client id (in this case the SSN number of the client)
* **channel** = notification channel used
* **otpSent** = confirmation if the notification was sent: true or false
* **error** = error description, if any
**Example:**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/otp_response.png)
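Based on the fields above, the reply might look like the following sketch (values are illustrative):
```json
{
  "processInstanceId": 12345,
  "clientId": "1871201460101",
  "channel": "SMS",
  "otpSent": true,
  "error": null
}
```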
## Example: generate an OTP from a business flow
It is important to identify the business identifier that you are going to use to validate the OTP; it can be, for example, a user identification number.
1. Configure the templates that you want to use (for example, an SMS template).
2. Make sure the Kafka communication is defined properly: the topic used to send the OTP request (`KAFKA_TOPIC_OTP_GENERATE_IN`) and the topic used to receive the response (`KAFKA_TOPIC_OTP_GENERATE_OUT`).
You can check the defined topics by going to **FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > otp**.
3. Use the FlowX Designer to add a new **Kafka send event** to the correct node in the process definition.
4. Add the proper configuration to the action: the Kafka topic and the body message.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/kafka_config_otp.png)
5. Add a node to the process definition (for the Kafka receive event).
6. Configure the key on which you want to receive the response in the process instance parameters.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/otp_config1.png)
# Handling OTP
The Notifications plugin can also be used to handle the one-time password (OTP) generation and validation flow.
The flow consists of two steps: OTP generation and OTP validation.
### OTP Configuration
The desired character size and expiration time of the generated one-time passwords can be configured using the following environment variables:
* `FLOWX_OTP_LENGTH`
* `FLOWX_OTP_EXPIRE_TIME_IN_SECONDS`
Let's go through the examples for both steps:
# Validate OTP
## Define needed Kafka topics
Kafka topic names can be set by using environment variables:
* `KAFKA_TOPIC_OTP_VALIDATE_IN` - an event sent on this topic (containing an OTP and an identifier) triggers a check of whether the OTP is valid
* `KAFKA_TOPIC_OTP_VALIDATE_OUT` - the validation result is sent back to the Engine on this topic
You can check the defined topics by going to **FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > otp**.
The Engine listens for messages on topics with names matching a certain pattern, so make sure to use an outgoing topic name that matches the pattern configured in the Engine.
## Request to validate an OTP
Values expected in the request body:
* **processInstanceId** = process instance ID
* **clientId** = the user's unique ID in the system
* **channel** = notification channel: SMS/MAIL
* **otp** = OTP code that you received, used to compare with the one that was sent from the system
Example:
```json
{
  "processInstanceId": 12345,
  "clientId": "1871201460101",
  "channel": "MAIL",
  "otp": "1111"
}
```
### Reply from validate OTP
Values expected in the reply body:
* **clientId** = the user's unique ID in the system
* **channel** = notification channel used
* **otpValid** = confirmation if the provided OTP code was the same as the one sent from the system
Example:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/otp_validate_audit.png)
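Based on the fields above, the reply might look like this sketch (values are illustrative):
```json
{
  "clientId": "1871201460101",
  "channel": "MAIL",
  "otpValid": true
}
```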
## Example: validate an OTP from a business flow
Similar to generating the OTP, you can validate the OTP that was generated for an identifier.
1. Check that the needed topics are configured correctly (`KAFKA_TOPIC_OTP_VALIDATE_IN` and `KAFKA_TOPIC_OTP_VALIDATE_OUT`).
2. Add the action for sending the validation request on the node that contains the 'Generate OTP' actions.
3. Add the proper configuration to the action: the Kafka topic and the body message.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/validate_otp_temp.png)
4. Add a node to the process definition (for the [Receive Message Task](../../../../../building-blocks/node/message-send-received-task-node#receive-message-task)).
5. Configure the key on which you want to receive the response in the process instance parameters.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/validate_otp3.png)
# Sending a notification
The plugin can be used for sending many kinds of notifications, such as emails or SMS messages. It can be easily integrated into one of your business processes.
## Configuring the process
To configure a business process that sends notifications, follow the next steps:
* use **FlowX Designer** to create/edit a [notification template](./managing-notification-templates)
* use **Process Designer** to add a [**Send Message Task**](/4.0/docs/building-blocks/node/message-send-received-task-node) and a [**Receive Message Task**](../../../../building-blocks/node/message-send-received-task-node#receive-message-task)
* configure the needed node [actions](../../../../building-blocks/actions/actions)
* configure the request body
**DEVELOPER**: Make sure the needed [Kafka topics](../../../../../setup-guides/plugins-setup-guide/notifications-plugin-setup#kafka-configuration) are configured properly.
Kafka topic names can be set by using the following environment variables:
* `KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN` - topic used to trigger the request to send a notification
* `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` - topic used for sending replies after sending the notification
The following values are expected in the request body:
| Key | Definition | Requirement |
| :----------: | :-----------------------------------------------------------------------------------------: | :-------: |
| language | The language that should be used | Mandatory |
| templateName | The name of the notification template that is used | Mandatory |
| channel | Notification channel: SMS/MAIL | Mandatory |
| receivers | Notification receivers: email/phone number | Mandatory |
| senderEmail | Notification sender email | Optional |
| senderName | Notification sender name | Optional |
| attachments | Attachments that are sent with the notification template (only used for MAIL notifications) | Optional |
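Putting the table together, a request body might look like the following sketch (addresses and names are placeholders; an example with `attachments` appears in the email section later in this document):
```json
{
  "language": "en",
  "templateName": "welcomeLetter",
  "channel": "MAIL",
  "receivers": ["john.doe@mail.com"],
  "senderEmail": "no-reply@example.com",
  "senderName": "Onboarding Team"
}
```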
Check the detailed [example](#example-send-a-notification-from-a-business-flow) below.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notification_archi_send.png)
## Example: send a notification from a business flow
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/send_a_notification_proc.png)
Let's pick a simple use case: say we need to send a welcome letter when we onboard a new customer. The steps are the following:
1. Configure the template that you want to use for the welcome email, see the previous section, [Managing notification templates](./managing-notification-templates) for more information.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/send_a_notif_from_business_flow.gif)
2. Use the FlowX.AI Designer to add a [**Send Message Task**](../../../../building-blocks/node/message-send-received-task-node#send-message-task) and a [**Receive Message Task**](../../../../building-blocks/node/message-send-received-task-node#receive-message-task).
3. On the **Send Message Task** add a proper configuration to the action, the Kafka topic and request body message to be sent:
* **Topics** - `KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN` - (in our example, `flowx-notifications-qa`)
You can check the defined topics by going to **FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > notifications**.
* **Message** (expected parameters):
* templateName
* channel
* language
* receivers
* **Headers** - it is always `{"processInstanceId": ${processInstanceId}}`
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notif_params_send.png)
4. On the **Receive Message Task**, add the needed topic to receive the Kafka response - `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` (in our example, `ai.flowx.updates.qa.notification.request.v1`).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/generate_notif_receive.png)
5. Run the process and look for the response, either via the **Audit log** or by checking the responses on the Kafka topic defined by the `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` variable.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/notif_send_resp.png)
Response example at `KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT`:
```json
{
  "identifier": null,
  "templateName": "welcomeLetter",
  "language": "en",
  "error": null
}
```
# Sending an email
To use the notification plugin for sending emails with attachments, you must define the same topic configuration as for sending regular notifications. A notification template must be created, and the corresponding Kafka topics must be defined.
Check first the [Send a notification](./sending-a-notification) section.
## Defining process actions
### Example: send an email notification with attached files from a business flow
Let's pick a simple use case: imagine we need to send a copy of a contract signed by a new customer. Before configuring the notification action, another action must be defined; this first action saves the new contract using the documents plugin.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/send_email_notif_attach.jpeg)
See the example from the [**Generating a document from template**](../documents-plugin/generating-from-html-templates) section.
The steps for sending the notification are the following:
1. Configure the template that you want to use for the email; see the [Managing notification templates](./managing-notification-templates) section for more information.
2. Check that the needed topics are defined by going to **`FlowX Designer > Platform Status > notification-plugin-mngt > kafkaTopicsHealthCheckIndicator > details > configuration > topic > notifications`**.
3. Use the **FlowX Designer** to add a new [Send Message Task (Kafka)](../../../../building-blocks/node/message-send-received-task-node#send-message-task) action to the correct node in the process definition.
4. Add the proper configuration to the action: the Kafka topic and the message to be sent.
The message to be sent to Kafka will look something like:
```json
{
  "templateName": "contractCopy",
  "identifier": "text",
  "language": "en",
  "receivers": ["someone@somewhere.com"],
  "contentParams": {
    "clientId": "clientId",
    "firstName": "first",
    "lastName": "last"
  },
  "attachments": [
    {
      "filename": "contract",
      "path": "MINIO_BUCKET_PATH/contract.pdf"
    }
  ]
}
```
# OCR plugin
The OCR (Optical Character Recognition) plugin is a powerful tool that enables you to read barcodes and extract handwritten signatures from .pdf documents with ease.
Before using the OCR service for reading barcodes and extracting signatures, please note the following requirements:
* All \*.pdf documents sent to the OCR service should be scanned at a minimum resolution of 200 DPI (approximately 1654x2339 px for A4 pages).
* The barcode is searched in the top 15% of each scanned page.
* Signatures are detected within boxes with a 4px black solid border.
* The plugin detects up to two signatures per scanned page.
The plugin supports **1D Code 128** barcodes. For more information about this barcode type, please refer to the documentation [here](https://graphicore.github.io/librebarcode/documentation/code128.html).
## Using the OCR plugin
You can utilize the OCR plugin to process generic document templates created either with a specific flow on FlowX.AI (HTML templates) or with any other document editor.
Using a specific flow on FlowX.AI offers several advantages:
* Centralized management of templates and flows within a single application.
* Access to template history and version control.
### Use case
1. Prepare and print generic document templates.
2. End-users complete, sign, and scan the documents.
3. Upload the scanned documents to the flow.
4. FlowX validates the template (barcode) and the signatures.
### Scenario for FlowX.AI generated documents
1. Utilize the [**Documents plugin**](./documents-plugin/documents-plugin-overview) to create a [**document template**](./documents-plugin/generating-from-html-templates).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/ocr_doc_template.gif)
2. Create a process and add a [**Kafka Send Action**](../../../building-blocks/node/message-send-received-task-node#configuring-a-message-send-task-node) to a [**Send Message Task**](../../../building-blocks/node/message-send-received-task-node#send-message-task) node. Here you specify the [**kafka topic**](../../../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-kafka-concepts#topics) (address) where the template will be generated.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/ocr_kafka_send.png)
The Kafka topic for generating the template must match the topic defined in the **`KAFKA_TOPIC_DOCUMENT_GENERATE_HTML_IN`** variable. **DEVELOPER**: Refer to the [**Kafka configuration guide**](../../../../setup-guides/flowx-engine-setup-guide/engine-setup#configuring-kafka) for more details. For additional information, please see the [**Documents plugin setup guide**](../../../../setup-guides/plugins-setup-guide/documents-plugin-setup).
3. Fill in the **Message**. The request body should include the following values (a sketch follows below):
* **documentList** - a list of documents to be generated, including properties such as name and values to be replaced in the document templates
* **customId** - client ID
* **templateName** - the name of the template to be used
* **language**
* **includeBarcode** - true/false
* **data** - a map containing the values that should replace the placeholders in the document template, the keys used in the map should match those defined in the HTML template
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/ocr_message_body.png)
The [**`data` parameters**](../wysiwyg) must be defined in the document template beforehand. For more information, check the [**WYSIWYG editor**](../wysiwyg) section.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/ocr_data_model.png)
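As a rough sketch (the exact nesting may differ in your setup), the message body from step 3 could combine these values like this, assuming a single document entry and placeholder values:
```json
{
  "documentList": [
    {
      "customId": "10001",
      "templateName": "contractTemplate",
      "language": "en",
      "includeBarcode": true,
      "data": {
        "firstName": "John",
        "lastName": "Doe"
      }
    }
  ]
}
```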
4. Add a **barcode**:
* to include a **default barcode**, add the following parameter to the message body: `includeBarcode: true`
* to include a **custom barcode**, set `includeBarcode: false` and provide the desired data in the `data` field
5. Add a [**Receive Message Task**](../../../building-blocks/node/message-send-received-task-node#receive-message-task) node and specify the topic where you want to receive the response.
Ensure that the topic matches the one defined in the **`KAFKA_TOPIC_DOCUMENT_GENERATE_HTML_OUT`** variable.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/ocr_receive_response.png)
6. Add a [**User Task**](../../../building-blocks/node/user-task-node) node and configure an [**Upload file**](../../../building-blocks/actions/upload-file-action) action to send the file (defined by the **`KAFKA_TOPIC_DOCUMENT_PERSIST_IN`** variable) to the storage solution (for example, S3).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/ocr_upload_file.png)
7. The response is then sent back to the Kafka topic defined by the **`KAFKA_TOPIC_DOCUMENT_PERSIST_OUT`** environment variable through a callback action/subprocess.
8. Send the response (representing the path to the S3 file) to the OCR Kafka topic defined by the **`KAFKA_TOPIC_OCR_IN`** variable.
9. The result of the OCR validation is published on the Kafka topic defined by **`KAFKA_TOPIC_OCR_OUT`**.
### Setup guide
**DEVELOPER**: Refer to the OCR plugin setup guide for detailed instructions on setting up the OCR plugin.
# Authorization & access roles
## IAM solution
By default, Superset uses flask-openid, as implemented in flask-security.
Superset can be integrated with Keycloak, an open-source identity and access management solution. This integration enables users to manage authentication and authorization for their Superset dashboards.
### Prerequisites
* Keycloak server
* Keycloak Realm
* Keycloak Client & broker configured with OIDC protocols
* client\_secret.json
* admin username & password of postgres instance
* Superset Database created in postgresql
* optionally, cert-manager if you want to have SSL certificates on hostnames
# Reporting plugin
The FlowX.AI Reporting plugin helps you build and bootstrap custom reports using data from your BPMN processes. Moreover, it supports technical reports based on process instance data. Integrated with the FlowX.AI Engine, this plugin transforms raw data into actionable insights, enhancing decision-making and optimizing business processes.
### Quick walkthrough video
Watch this quick walkthrough video to get started:
### Data exploration and visualization
The plugin uses **Superset** as a free data exploration and visualization tool.
You can, however, use your own BI tool, such as Tableau or Power BI, as an alternative to Superset.
Use the suggested query structure and logic to ensure accurate reporting and avoid duplicates, even during database updates. Do not simply run SELECT queries on single reporting tables.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/reporting.png)
Apache Superset is an open-source software application for data exploration and data visualization able to handle data at a large scale. It enables users to connect their company's databases and perform data analysis, and easily build charts and assemble dashboards.
Superset is also an SQL IDE, so users can write SQL, join data, create datasets, and so on.
### Plugin architecture
Here’s an overview of the reporting plugin architecture:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/Reporting_diagram_small2.png)
### Setting up reporting for a process definition
If you want to extract custom business parameters for a process, first you should set up the master reporting data model in the published version of that process. This is done in Designer, as explained below in the [**Reporting Data Model**](#reporting-data-model).
The master reporting data model that is applied to all historical versions is the one belonging to the version that is set as Published. If the currently published version is read-only, you might need to create a new version, create the data model in it, mark it for reporting and set it as published (as explained in the section below).
See [**Reporting Data Model**](#reporting-data-model) section for more details.
Marking a version for reporting is accomplished by checking the “Use process in reporting” button in the Settings > General page of each process version you want to report on. Please note that you will not be able to change that setting on read-only versions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/use_proc_reporting.gif)
It is essential that the currently Published version has this setting enabled, as it is the master version for reporting all historical data.
Only the process instances belonging to reporting-enabled versions will be extracted to the database, using the master data model from the currently Published version.
The reason why you should first set up the data model on the published (or to-be-published) version is that all changes to the data model of the published version that is also marked "use\_in\_reporting" are instantly sent to the reporting plugin, potentially causing all historical process instances to be re-computed multiple times before you finish setting up the data model.
To optimize operations, always aim to finish the modifications on the master data model on a version that is either not published or not marked as "use\_in\_reporting", before marking it as "use\_in\_reporting" and ensuring it is published.
See [**Enabling Process Reporting**](#enabling-process-reporting) section for more details.
## Reporting data model
You can only create or modify the reporting data model in non-committed versions, as committed versions are read-only.
1. Open the branch view.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/rep1.png)
2. If the currently published version is read-only, start a new version.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/rep5.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/rep6.png)
3. Make sure the new work-in-progress branch is selected, then go back to the process definition page.
4. Click on the Data Model icon and navigate in the object model to the target parameters.
5. Set up the business data structure to be reported by using the Edit button for each parameter.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/rep8.png)
* Check "Use in reporting" flag.
* Select the parameter type (number, string, Boolean or array).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/rep9.png)
There are three parameter structures that you can report:
**Singleton** or primitive, for which a single value is saved for each process instance. They can be of the types number, string or Boolean and will all be found in the reporting table named `params_{process_alias}`.
**Array of primitives**, in which several rows are saved for each process instance, in a dedicated table (one value per row).
**Array of objects**, in which the system extracts one or more “leaves” from each object of an array of objects. Also saved as a dedicated table.
### Primitive parameters reporting
In the following example, there are 3 simple parameters that will be extracted: `loan_approvalFee`, `loan_currency` and `loan_downPayment`.
They will be reported in the table `params_{process_alias}`, one row per instance.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/rep10.png)
### Arrays of primitives reporting
In the following example, the applicant can submit more than one email address, so each email will be extracted to a separate row in a dedicated table in the reporting database.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/rep11.png)
Extracting arrays is computationally intensive; do not use them if the only purpose is to aggregate them afterward. Aggregation should be performed in the process, not in the reporting stage.
### Array of objects reporting
In this example, the applicant can have several real estate assets, so a subset of data items (`currentValue`, `mortgageBalance`, `propertyType`) will be extracted for each one of them.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/rep12.png)
Extracting arrays of objects is even more computationally intensive; do not use them if the only purpose is to aggregate them afterward. Aggregation should be performed in the process, not in the reporting stage.
## Enabling process reporting
Enable reporting by checking the “Use process in reporting” button on the settings/general page of each process version.
You can also make data model changes on an already published version (if it is not set as read-only), but the effects of the changes are immediate, and every periodic refresh will try to re-compute all historical instances to adapt to the changed data model, potentially consuming a lot of computing resources.
* Only the versions marked for reporting will be extracted to the database, using the master data model from the published version.
* Modifying the data model of a process version has no impact until the moment the version becomes published and is also set to "use\_in\_reporting".
* You can add or remove older process versions from reporting by opening the process branch modal, selecting the version, closing the modal, selecting Settings in the left bar, and then switching “Use process in reporting” on or off in the General tab. You might not be able to do this on committed (read-only) versions.
The reporting refresh schedule is set for a fixed interval (currently 5 minutes):
* No processing overlaps are allowed. If processing takes more than 5 minutes, the next processing is automatically postponed until the current one is finished.
* The number of Spark executors is automatically set by the reporting plugin depending on the volume of data to be processed, up to a certain fixed limit that is established in the cloud environment. This limit can be adapted depending on your needs.
* Rebuilding the whole history for a process (if the master data model, the process name, or the published version changes) typically takes more time. It is better to make these changes outside working hours.
## Process reporting modification scenarios
**Modifying the data model for the Published version** causes all data for that process to be deleted from the database and to be rebuilt from scratch using the Data Model of the current Published version.
* This includes deleting process-specific array tables and rebuilding them, possibly with different names.
* All historical instances from previous versions are also rebuilt using the most recent Published data model.
* This may take a lot of time, so it is better to perform these operations during periods of low system use.
The same thing happens whenever the Published version changes, or if the process name is modified.
**Adding Process Versions**: Adds new rows to existing tables.
**Removing Non-Published Process Versions** from reporting (by unchecking their “Use in reporting” switch in Settings) simply deletes their corresponding rows from the table.
**Removing the Currently Published Version from Reporting**: **Not advisable** as it applies a random older data model and deletes all instances processed with this version.
**Reporting Data Structure Obsolescence and Backwards Compatibility**: The master data model is applied to all historical data from all reported versions. If only the name of a parameter changes, the plugin uses its UUID to map the old name to the new one. However, if the full path changes or the parameter does not exist in an older version, it returns a `Null` value to the database.
## Reporting database
### Main tables
Common fields for joins:
* `inst_id`: Unique identifier for each process instance
* `query_time`: Timestamp for when the plugin extracts data
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/rep20.png)
**Useful fields from instances table**:
* `date_started/finished` timestamps
* `process_alias` (process name with "\_" instead of spaces)
The published version alias will be used for all the reported versions.
* State of the process (started, finished, etc.)
* `context_name`: Useful if the process is started by another process
**Useful fields from current\_nodes table**:
* `node_started/finished` timestamps for the nodes
* `node_name`, `node_type`
* `swimlane` of the current node
* `prev_node_end`: For calculating when a token is waiting for another process branch
**Useful fields from token\_history table**:
Similar to `current_nodes` but includes all nodes in the instance history.
### Parameters and object tables
Common fields, on which the joins are built:
* `inst_id`, unique identifier for each process instance
* `query_time`, recorded at the moment the plugin extracts data from the database
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/reporting30.png)
For the `params_` table, there is a single row per process instance. For arrays and arrays of objects tables, there are multiple rows per process instance.
## Using Superset for data exploration
### Data sources
The **Data** tab represents the sources of all information:
* Databases
* CSV files
* Tables
* Excel files
The Reporting plugin can be used with Superset by connecting it to a PostgreSQL DB.
### Charts
Charts represent the output of the information. There are multiple visualization charts/plugins available.
### Dashboards
With the use of dashboards, you can share persuasive flows, show how metrics change in various scenarios, and match your company efforts with logical, evidence-based visual indicators.
### Datasets
Contains all the information for extracting and processing data from the DB, including SQL queries, calculated metrics information, cache settings, etc. Datasets can also be exported / imported.
### Connecting to a database
Before using Superset, ensure you have a PostgreSQL database installed and configured. Follow these guides for setup:
[FlowX Engine DB configuration](../../../../../setup-guides/flowx-engine-setup-guide/engine-setup)
[Reporting DB configuration](../../../../../setup-guides/plugins-setup-guide/reporting-setup)
Read-only users should be used in production in the reporting-plugin cronjob.
To connect Superset to a database, follow the next steps:
1. From the Toolbar, hover your cursor over the **"+"** icon, then select **Data**, and then select **Connect Database**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/connecting_to_db.png)
2. The **Connect a database** window appears. Select the appropriate **database card** (in this case - PostgreSQL).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/connect_db_superset.png)
3. After you have selected the DB, click **Connect this database with a SQLAlchemy URI string instead?**.
4. Fill in the **SQLALCHEMY URI** and then click **Connect**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/superset_db_URI.png)
The **SQLAlchemy URI** for **reporting-db** should be in this format: `postgresql://postgres:XXXXXXXXXX@reporting-plugin-postgresql:{{port}}/reporting`.
### Creating and configuring charts
There are multiple ways in which you can create/configure a chart.
#### Creating charts using Datasets tab
To create a Chart using the first method, you must follow the next steps:
1. From the top toolbar, select **Data** and then **Datasets**.
You need to have a dataset added to Superset first. From that particular dataset you can build a visualization.
2. Select the desired **dataset**.
3. On the **explore** page, choose the **visualization type** and click **Create chart**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/superset_add_chart.gif)
4. When you select a dataset, the **table** visualization type is selected by default.
To view all the existing visualization types, click **View all charts**; the chart gallery will open.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/superset_visualization_type.png)
#### Creating charts using Chart gallery
Using the Chart gallery is a useful method when you are not quite sure about which chart will be best for you.
To create a Chart using the second method, you must follow the next steps:
1. Select the **"+"** icon from the top toolbar and choose **Chart**.
2. Choose the **dataset** and **chart type**.
3. Review the description and example graphics of the selected chart, then click **Create new chart**.
If you wish to explore all the chart types available, filter by **All charts**. The charts are also grouped by categories.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/superset_add_chart_second.gif)
### Configuring a Chart
Configure the **Query** fields by dragging and dropping relevant columns to the matching fields. Note that time and query form attributes vary by chart type.
### Exporting/importing a Chart
You can export and import charts to help you analyze your data and manipulate dashboards. To export/import a chart, follow the next steps:
1. Open **Superset** and navigate to **Charts** from the top navigation bar.
2. Select the desired **chart** and click the **breadcrumbs** menu in the top-right corner.
3. Choose an export option: .CSV, .JSON, or Download as image.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/reporting_export_imp.png)
**Table example**:
* **Time** - time related form attributes
* **Query** - query attributes
* **Dimensions** - one or many columns to group by
* **Metrics** - metrics to display
* **Percentage metrics** - metrics for which the percentage of the total is displayed, calculated only from data within the row limit
* **Filters** - metric used for filtering
* **Sort by** - metric used to define how the top series are sorted if a series or row limit is present
* **Row limit** - limits the number of rows that get displayed
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/superset_query.png)
### Creating a dashboard
To create a dashboard follow the next steps:
1. Create a new chart and save it to a new dashboard.
2. To publish, click **Save and go to Dashboard**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/save_dashboard.gif)
For details on how to configure the FlowX.AI reporting plugin, check the following section:
# Languages
You can add a language and use it in document and notification templates.
![Languages](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/languages.png)
On the main screen inside **Languages**, you have the following elements:
* **Code** - not displayed in the end-user interface, but used to ensure value uniqueness
* **Name** - the name of the language
* **Default** - you can set a language as **Default** (default values can't be deleted)
When working with [substitution tags](./substitution-tags) or other elements that take values from languages defined in the CMS, the default values extracted when running a process will be the ones marked by the default language.
![Default values](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/lang_default_values.png)
* **Edit** - button used to edit a language
![Edit](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/edit_languages.png)
* **Delete** - button used to delete a language
Before deleting a language, make sure it is not used in any content management configuration.
* **New** - button used to add a new language
### Adding a new language
To add a new language, follow the next steps:
1. Go to **FLOWX Designer** and select the **Content Management** tab.
2. Select **Languages** from the list.
3. Choose a new **language** from the list.
4. Click **Add** after you finish.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/adding_new_language.gif)
# WYSIWYG editor
FlowX Designer's WYSIWYG ("What You See Is What You Get") editor enables you to create and modify notification and document templates without the need for complicated coding from the developers. WYSIWYG editors make the creation/editing of any type of document or notification easier for the end-user.
Because the editor displays the document on screen as it will be published or printed, the user can adjust the text, graphics, photos, or other document/notification elements before generating the final output.
## WYSIWYG Components
### Header
The formatting header of the editor allows users to manipulate/format the content of the document.
### Body
The Body is the main part of the editor where you can edit your template.
After you have defined some parameters in the **Data Model** tab, you can type "**#**" in the body to trigger a dropdown from which you can choose the one you want to use.
### Source
The **Source** button can be used to switch to the HTML editor. You can use the HTML view/editor as a debugging tool, or you can edit the template directly by writing code here.
![Source Code](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/wysiwyg_source.gif)
## Document Templates
One of the main features of the [document management plugin](./custom-plugins/documents-plugin/documents-plugin-overview) is the ability to generate new documents based on custom templates and prefilled with data related to the current process instance.
![Document template](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/wysiwyg_document_template.png)
## Notification Templates
The notification WYSIWYG body has some additional fields (compared to document templates):
* **Type** - can be either MAIL or SMS (SMS only if there is an external adapter)
* **Forward on Kafka** - if this box is checked, the notification is not sent directly by the plugin to the destination, but is forwarded to another adapter
![Notification template](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/wysiwyg_notif_template.png)
## Data Model
Using the data model, you can define key-value pairs (parameters) that will be displayed and reused in the body. Multiple parameter types can be added:
* STRING
* NUMBER
* BOOLEAN
* OBJECT
* ARRAY (which has an additional `item` field)
![Data model](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/wysiwyg_data_model.png)
Parameters can be defined as mandatory or not. When you try to generate a template without filling in all the mandatory parameters, the following error message will be displayed: *"Provided data cannot be empty if there are any required properties defined."*
# Third-party components
FlowX.AI uses a number of third-party software components:
### Open-source
* [Keycloak](#keycloak)
* [Kafka](#kafka)
* [PostgreSQL](#postgresql)
* [MongoDB](#mongodb)
* [Redis](#redis)
* [NGINX](#nginx)
* [EFK (Elastic Search, Fluentd, Kibana)](#efk-kibana-fluentd-elastic-search)
* [S3 (MinIO)](#s3-minio)
### Not open-source
* [OracleDB](#oracle-db)
### Third-party open-source components supported/tested versions
FlowX.AI supports any version of the third-party components listed as prerequisites.
For optimal performance and reliability, our internal QA process validates new releases using specific versions as indicated in the provided table.
While exploring alternative versions that suit your company's specific requirements, we recommend referring to the compatibility matrix for guidance.
In the unlikely event that you encounter any compatibility issues with FlowX.AI, please open a support ticket [**here**](https://support.flowx.ai/), and our dedicated team will address and resolve any identified bugs following our standard support process.
Compatibility Matrix:
* FlowX Platform: Recommended and tested versions
* Third-Party Components: Supported versions based on specific requirements and client preferences
| FLOWX.AI Platform Version | Component name | Supported/tested versions |
| ------------------------- | ------------------------ | ------------------------- |
| 4.0.0 | Keycloak | 22.x |
| 4.0.0 | Kafka | 3.2.x |
| 4.0.0 | PostgreSQL | 16.2.x |
| 4.0.0 | MongoDB | 7.0.x |
| 4.0.0 | Redis | 7.2.x |
| 4.0.0 | NGINX Ingress Controller | 1.2.x |
| 4.0.0 | Elasticsearch | 7.17.x |
| 4.0.0                     | MinIO                    | 2024-02-26T09-33-48Z      |
### Third-party components supported/tested versions
| FLOWX.AI Platform version | Component name | Supported/tested versions |
| ------------------------- | -------------- | ------------------------- |
| 4.0.0 | OracleDB | 21C/ 21-XE |
### Summary
#### Keycloak
Keycloak is an open-source software product to allow single sign-on with Identity and Access Management aimed at modern applications and services.
[Keycloak documentation](https://www.keycloak.org/documentation)
#### Kafka
Apache Kafka is an open-source distributed event streaming platform that can handle a high volume of data and enables you to pass messages from one end-point to another.
Kafka is a unified platform for handling all the real-time data feeds. Kafka supports low latency message delivery and gives a guarantee for fault tolerance in the presence of machine failures. It has the ability to handle a large number of diverse consumers.
Kafka is very fast, capable of handling on the order of 2 million writes per second. Kafka persists all data to the disk, which essentially means that all the writes go to the page cache of the OS (RAM). This makes it very efficient to transfer data from the page cache to a network socket.
[Intro to Kafka](../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-kafka-concepts)
[Kafka documentation](https://kafka.apache.org/documentation/)
#### PostgreSQL
PostgreSQL, also known as Postgres, is a free and open-source relational database management system emphasizing extensibility and SQL compliance.
[PostgreSQL documentation](https://www.postgresql.org/docs/)
#### MongoDB
MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional [schemas](https://en.wikipedia.org/wiki/Database_schema).
Used by FlowX to store business process data and configuration information on the core/plugin components.
[MongoDB documentation](https://www.mongodb.com/docs/)
#### Redis
Redis is a fast, open-source, in-memory key-value data store that is commonly used as a cache to store frequently accessed data in memory so that applications can be responsive to users.
It delivers sub-millisecond response times enabling millions of requests per second for applications.
It can also be used as a Pub/Sub messaging solution, allowing messages to be passed to channels and all subscribers to that channel to receive them. This feature enables information to flow quickly through the platform without using up space in the database, as messages are not stored.
It is used by FLOWX.AI for caching the process definitions-related data.
[Intro to Redis](../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-redis)
[Redis documentation](https://redis.io/docs/)
#### NGINX
NGINX is a web server that can also be used as a reverse proxy, load balancer, mail proxy, and HTTP cache.
FLOWX utilizes the Nginx engine as a load balancer and for routing the web traffic (API calls) from the SPA (single page application) to the backend service, to the engine, and to various plugins.
The FLOWX.AI Designer SPA will use the backend service to manage the platform via REST calls, will use API calls to manage specific content for the plugins, and will use REST and SSE calls to connect to the engine.
[Intro to NGINX](../platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-nginx)
[NGINX documentation](https://nginx.org/en/docs/)
#### EFK (Elastic Search, Fluentd, Kibana)
Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases.
As the heart of the Elastic Stack, it centrally stores your data for lightning-fast search, fine‑tuned relevancy, and powerful analytics that scale with ease.
Used by FLOWX.AI in the core component and optionally to allow searching for business process transaction data.
[Elastic stack documentation](https://www.elastic.co/elastic-stack/)
[Fluentd documentation](https://docs.fluentd.org/)
#### Kafka Connect Elasticsearch Service Sink
The Kafka Connect Elasticsearch Service Sink connector moves data from Apache Kafka® to Elasticsearch. It writes data from a topic in Kafka to an index in Elasticsearch. All data for a topic have the same type in Elasticsearch. This allows an independent evolution of schemas for data from different topics. This simplifies the schema evolution because Elasticsearch has one enforcement on mappings; that is, all fields with the same name in the same index must have the same mapping type.
#### S3 (MinIO)
FLOWX.AI uses [Min.IO](http://min.io/) as a cloud storage solution.
[MIN.IO documentation](https://min.io/)
[Docker available here](https://quay.io/repository/minio/minio?tab=tags\&tag=RELEASE.2022-05-26T05-48-41Z)
#### Oracle DB
Oracle Database is a relational database management system (RDBMS).
[Oracle DB documentation](https://www.oracle.com/database/technologies/)
#### Superset
Apache Superset is a business intelligence web application. It helps users to explore and visualize their data, from simple pie charts to detailed dashboards.
[Superset](https://superset.apache.org/docs/intro)
# Business filters
A business filter is an optional attribute from the authorization token that can be set in order to restrict access to process instances based on a business-specific value (for example, a bank branch name).
Using business filters, we can make sure only the allowed users, with the same attribute, can access a [**process instance**](../../building-blocks/process/process-instance).
In some cases it might be necessary to restrict access to process nodes based on certain [**business rules**](../../building-blocks/actions/business-rule-action/business-rule-action), for example only users from a specific bank branch can view the process instances started from that branch. This can be done by using business filters.
Before they can be used in the process definition, the business filter attributes need to be set in the identity management platform. They have to be configured as a list of filters and should be made available on the authorization token. Application users will also have to be assigned this value.
When this filter needs to be applied, the process definition should include nodes with actions that will store the current business filter value to a custom `task.businessFilters` key on process parameters.
If this value is set in the process instance parameters, only users that have the correct business filter attribute will be able to interact with that process instance.
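For illustration only, assuming the branch name is the filter value (the exact shape depends on your configuration), the stored process parameter might look like this:
```json
{
  "task": {
    "businessFilters": ["branch-downtown"]
  }
}
```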
# Swimlanes
Swimlanes provide a way of grouping process nodes by process participants.
Using swimlanes we can make sure only certain user roles have access to certain process nodes.
In certain scenarios, it is necessary to restrict access to specific process [**nodes**](../../building-blocks/node/) based on user roles. This can be achieved by organizing nodes into different swimlanes.
Each swimlane can be configured to grant access only to users with specific roles defined in the chosen identity provider platform.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/multiple_swimlanes.png)
Depending on the type of node added within a swimlane, only users with the corresponding swimlane roles will have the ability to initiate process instances, view process instances, and perform actions on them.
[Scopes and roles for managing processes](../../../setup-guides/flowx-engine-setup-guide/configuring-access-roles-for-processes)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/swimlanes_permissions.gif)
When creating a new process definition, a default swimlane will automatically be added.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/swimlanes_default.gif)
As the token moves from one node to the next, it may transition between swimlanes. If a user interacting with the process instance no longer has access to the new swimlane, they will observe the process in read-only mode and will be unable to interact with it until the token returns to a swimlane they have access to.
Users will receive notifications when they can no longer interact with the process or when they can resume actions on it.
# Android SDK
## Android project requirements
System requirements:
* **minSdk = 26** (Android 8.0)
* **compileSdk = 34**
The SDK library was built using:
* **[Android Gradle Plugin](https://developer.android.com/build/releases/gradle-plugin) 8.1.4**
* **[Kotlin](https://kotlinlang.org/) 1.9.24**
## Installing the library
1. Add the maven repository in your project's `settings.gradle.kts` file:
```kotlin
dependencyResolutionManagement {
...
repositories {
...
maven {
url = uri("https://nexus-jx.dev.rd.flowx.ai/repository/flowx-maven-releases/")
credentials {
username = "your_username"
password = "your_password"
}
}
}
}
```
2. Add the library as a dependency in your `app/build.gradle.kts` file:
```kotlin
dependencies {
...
implementation("ai.flowx.android:android-sdk:4.0.0")
...
}
```
### Library dependencies
Impactful dependencies:
* **[Koin](https://insert-koin.io/) 3.2.2**, including the implementation for **[Koin Context Isolation](https://insert-koin.io/docs/reference/koin-core/context-isolation/)**
* **[Compose BOM](https://developer.android.com/jetpack/compose/bom/bom-mapping) 2024.06.00** + **[Compose Compiler](https://developer.android.com/jetpack/androidx/releases/compose-compiler) 1.5.14**
* **[Accompanist](https://google.github.io/accompanist/) 0.32.0**
* **[Kotlin Coroutines](https://kotlinlang.org/docs/coroutines-overview.html) 1.8.0**
* **[OkHttp BOM](https://square.github.io/okhttp/) 4.11.0**
* **[Retrofit](https://square.github.io/retrofit/) 2.9.0**
* **[Coil Image Library](https://coil-kt.github.io/coil/) 2.5.0**
* **[Gson](https://github.com/google/gson) 2.11.0**
### Public API
The SDK library is managed through the `FlowxSdkApi` singleton instance, which exposes the following methods:
| Name | Description | Definition |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `init` | Initializes the FlowX SDK. Must be called in your application's `onCreate()` | `fun init(context: Context, config: SdkConfig, accessTokenProvider: FlowxSdkApi.Companion.AccessTokenProvider? = null, customComponentsProvider: CustomComponentsProvider? = null, customStepperHeaderProvider: CustomStepperHeaderProvider? = null)` |
| `setAccessTokenProvider` | Updates the access token provider (i.e. a functional interface) inside the renderer | `fun setAccessTokenProvider(accessTokenProvider: FlowxSdkApi.Companion.AccessTokenProvider)` |
| `setupTheme` | Sets up the theme to be used when rendering a process | `fun setupTheme(themeUuid: String, fallbackThemeJsonFileAssetsPath: String? = null, @MainThread onCompletion: () -> Unit)` |
| `changeLocaleSettings` | Changes the current locale settings (i.e. locale and language) | `fun changeLocaleSettings(locale: Locale, language: String)` |
| `startProcess` | Starts a FlowX process instance, by returning a `@Composable` function where the process is rendered. | `fun startProcess(processName: String, params: JSONObject = JSONObject(), isModal: Boolean = false, closeModalFunc: ((processName: String) -> Unit)? = null): @Composable () -> Unit` |
| `continueProcess` | Continues an existing FlowX process instance, by returning a `@Composable` function where the process is rendered. | `fun continueProcess(processUuid: String, isModal: Boolean = false, closeModalFunc: ((processName: String) -> Unit)? = null): @Composable () -> Unit` |
| `executeAction` | Runs an action from a custom component | `fun executeAction(action: CustomComponentAction, params: JSONObject? = null)` |
| `getMediaResourceUrl` | Extracts a media item URL needed to populate the UI of a custom component | `fun getMediaResourceUrl(key: String): String?` |
| `replaceSubstitutionTag` | Extracts a substitution tag value needed to populate the UI of a custom component | `fun replaceSubstitutionTag(key: String): String` |
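For illustration, here is a minimal sketch of rendering a process inside a Compose screen, assuming the SDK has already been initialized and authenticated; the process name and parameters are placeholders:
```kotlin
@Composable
fun OnboardingScreen() {
    // startProcess returns a @Composable function that renders the process UI;
    // remember it so the process is not restarted on every recomposition
    val processScreen = remember {
        FlowxSdkApi.getInstance().startProcess(
            processName = "client_onboarding",              // placeholder process name
            params = JSONObject().put("channel", "mobile"), // optional start parameters
        )
    }
    processScreen()
}
```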
## Configuring the library
To configure the SDK, call the `init` method in your project's application class `onCreate()` method:
```kotlin
fun init(
context: Context,
config: SdkConfig,
accessTokenProvider: AccessTokenProvider? = null,
customComponentsProvider: CustomComponentsProvider? = null,
customStepperHeaderProvider: CustomStepperHeaderProvider? = null,
)
```
#### Parameters
| Name | Description | Type | Requirement |
| ----------------------------- | ---------------------------------------------------------- | -------------------------------------------------------------------- | ----------------------------- |
| `context` | Android application `Context` | `Context` | Mandatory |
| `config` | SDK configuration parameters | `ai.flowx.android.sdk.process.model.SdkConfig` | Mandatory |
| `accessTokenProvider`         | Functional interface provider for passing the access token | `ai.flowx.android.sdk.FlowxSdkApi.Companion.AccessTokenProvider?`     | Optional. Defaults to `null`. |
| `customComponentsProvider` | Provider for the `@Composable`/`View` custom components | `ai.flowx.android.sdk.component.custom.CustomComponentsProvider?` | Optional. Defaults to `null`. |
| `customStepperHeaderProvider` | Provider for the `@Composable` custom stepper header view | `ai.flowx.android.sdk.component.custom.CustomStepperHeaderProvider?` | Optional. Defaults to `null`. |
* Providing the `access token` is explained in the [authentication](#authentication) section.
* The `custom components` implementation is explained in [its own section](#custom-components).
* The implementation for providing a `custom view for the header` of the [Stepper](../docs/building-blocks/process/navigation-areas#stepper) component is detailed in [its own section](#custom-header-view-for-the-stepper-component).
#### Sample
```kotlin
class MyApplication : Application() {
override fun onCreate() {
super.onCreate()
initFlowXSdk()
}
private fun initFlowXSdk() {
FlowxSdkApi.getInstance().init(
context = applicationContext,
config = SdkConfig(
baseUrl = "URL to FlowX backend",
imageBaseUrl = "URL to FlowX CMS Media Library",
enginePath = "some_path",
language = "en",
locale = Locale.getDefault(),
validators = mapOf("exact_25_in_length" to { it.length == 25 }),
enableLog = false,
),
accessTokenProvider = null, // null by default; can be set later, depending on the existing authentication logic
customComponentsProvider = object : CustomComponentsProvider {...},
customStepperHeaderProvider = object : CustomStepperHeaderProvider {...},
)
}
}
```
The configuration properties that should be passed as `SdkConfig` data for the `config` parameter above are:
| Name | Description | Type | Requirement |
| -------------- | ------------------------------------------------------------------- | ----------------------------------- | ------------------------------------------- |
| `baseUrl` | URL to connect to the FlowX back-end environment | `String` | Mandatory |
| `imageBaseUrl` | URL to connect to the FlowX Media Library module of the CMS | `String` | Mandatory |
| `enginePath` | URL path segment used to identify the process engine service | `String` | Mandatory |
| `language` | The language used for retrieving enumerations and substitution tags | `String` | Optional. Defaults to `en`. |
| `locale` | The locale used for date, number and currency formatting | `java.util.Locale` | Optional. Defaults to `Locale.getDefault()` |
| `validators`   | Custom validators for form elements                                  | `Map<String, (String) -> Boolean>?` | Optional.                                   |
| `enableLog`    | Flag indicating if logs should be printed                            | `Boolean`                           | Optional. Defaults to `false`.              |
#### Custom validators
The `custom validators` map is a collection of lambda functions, referenced by *name* (i.e. the value of the `key` in this map), each returning a `Boolean` based on the `String` which needs to be validated.
For a custom validator to be evaluated for a form field, its *name* must be specified in the form field process definition.
Looking at the example above:
```kotlin
mapOf("exact_25_in_length" to { it.length == 25 })
```
if a form element should be validated using this lambda function, a custom validator named `"exact_25_in_length"` should be specified in the process definition.
## Using the library
### Authentication
To be able to use the SDK, **authentication is required**. Therefore, before calling any other method on the singleton instance, make sure that the access token provider is set by calling:
```kotlin
FlowxSdkApi.getInstance().setAccessTokenProvider(accessTokenProvider = { "your access token" })
```
The lambda passed in as parameter has the `ai.flowx.android.sdk.FlowxSdkApi.Companion.AccessTokenProvider` type, which is actually a functional interface defined like this:
```kotlin
fun interface AccessTokenProvider {
fun get(): String
}
```
Whenever the access token changes based on your own authentication logic, it must be updated in the renderer by calling the `setAccessTokenProvider` method again.
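For example, a minimal sketch of this wiring, where `AuthManager` is a hypothetical stand-in for your app's own authentication layer:
```kotlin
// Hypothetical sketch: `AuthManager` stands in for the app's own authentication logic.
class AuthManager {
    @Volatile
    private var accessToken: String = ""

    fun onTokenRefreshed(newToken: String) {
        accessToken = newToken
        // re-register the provider so the renderer always reads the fresh token
        FlowxSdkApi.getInstance().setAccessTokenProvider(
            accessTokenProvider = { accessToken }
        )
    }
}
```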
### Theming
Before setting up the theme, make sure the `access token provider` was set. Check the [authentication](#authentication) section for details.
To be able to use styled components while rendering a process, the theming mechanism must be invoked by calling the `suspend`-ing `setupTheme(...)` method over the singleton instance of the SDK:
```kotlin
suspend fun setupTheme(
themeUuid: String,
fallbackThemeJsonFileAssetsPath: String? = null,
@MainThread onCompletion: () -> Unit
)
```
#### Parameters
| Name | Description | Type | Requirement |
| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------ | ---------------------------- |
| `themeUuid` | UUID string of the theme configured in FlowX Designer | `String` | Mandatory. Can be empty |
| `fallbackThemeJsonFileAssetsPath` | Android asset relative path to the corresponding JSON file to be used as fallback, in case fetching the theme fails and there is no cached version available | `String?` | Optional. Defaults to `null` |
| `onCompletion` | `@MainThread` invoked closure, called when setting up the theme completes | `() -> Unit` | Mandatory |
If the `themeUuid` parameter value is empty (`""`), no theme will be fetched, and the mechanism will rely only on the fallback file, if set.
If the `fallbackThemeJsonFileAssetsPath` parameter value is `null`, there will be no fallback mechanism in place, meaning that if fetching the theme fails, the rendered process will have no style applied over its displayed components.
The SDK caches the fetched themes, so if a theme fetch fails, a cached version will be used, if available. Otherwise, it will use the file given as fallback.
#### Sample
```kotlin
viewModelScope.launch {
FlowxSdkApi.getInstance().setupTheme(
themeUuid = "some uuid string",
fallbackThemeJsonFileAssetsPath = "theme/a_fallback_theme.json",
) {
// theme setup complete
// TODO specific logic
}
}
```
The `fallbackThemeJsonFileAssetsPath` is always resolved relative to your project's `assets/` directory, meaning the example parameter value above is translated to `file://android_asset/theme/a_fallback_theme.json` before being evaluated.
Do not [start](#start-a-flowx-process) or [resume](#resume-a-flowx-process) a process before the completion of the theme setup mechanism.
### Changing current locale settings
The current `locale` and `language` can also be changed after the [initial setup](#configuring-the-library), by calling the `changeLocaleSettings` function:
```kotlin
fun changeLocaleSettings(locale: Locale, language: String)
```
#### Parameters
| Name | Description | Type | Requirement |
| ---------- | ----------------------------- | ------------------ | ----------- |
| `locale` | The new locale | `java.util.Locale` | Mandatory |
| `language` | The code for the new language | `String` | Mandatory |
**Do not change the locale or the language while a process is rendered.**
The change is successful only if made before [starting](#start-a-flowx-process) or [resuming](#resume-a-flowx-process) a process.
#### Sample
```kotlin
FlowxSdkApi.getInstance().changeLocaleSettings(locale = Locale("en", "US"), language = "en")
```
### Start a FlowX process
Before starting a process, make sure [authentication](#authentication) and [theming](#theming) were correctly set up.
Once all the above steps are performed and the prerequisites are fulfilled, a new instance of a FlowX process can be started using the `startProcess` function:
```kotlin
fun startProcess(
applicationUuid: String,
processName: String,
params: JSONObject = JSONObject(),
isModal: Boolean = false,
onProcessEnded: (() -> Unit)? = null,
closeModalFunc: ((processName: String) -> Unit)? = null,
): @Composable () -> Unit
```
#### Parameters
| Name | Description | Type | Requirement |
| ----------------- | -------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------------------------------- |
| `applicationUuid` | The uuid string of the application containing the process to be started | `String` | Mandatory |
| `processName` | The name of the process | `String` | Mandatory |
| `params`          | The starting params for the process, if any                                                         | `JSONObject`                       | Optional. If omitted, it defaults to `JSONObject()` |
| `isModal`         | Flag indicating whether the process can be closed at any time by tapping the top-right close button | `Boolean`                          | Optional. It defaults to `false`.                   |
| `onProcessEnded` | Lambda function where you can do additional processing when the started process ends | `(() -> Unit)?` | Optional. It defaults to `null`. |
| `closeModalFunc` | Lambda function where you should handle closing the process when `isModal` flag is `true` | `((processName: String) -> Unit)?` | Optional. It defaults to `null`. |
The returned **[@Composable](https://developer.android.com/reference/kotlin/androidx/compose/runtime/Composable)** function must be included in its own **[Activity](https://developer.android.com/reference/android/app/Activity)**, which is part of (controlled and maintained by) the container application.
This wrapper activity must display only the `@Composable` returned from the SDK (i.e. it occupies the whole activity screen space).
#### Sample
```kotlin
class ProcessActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
...
setContent {
FlowxSdkApi.getInstance().startProcess(
applicationUuid = "your application uuid",
processName = "your process name",
params = JSONObject(),
isModal = true,
onProcessEnded = {
// NOTE: possible processing could involve doing something in the container app (i.e. navigating to a different screen)
},
closeModalFunc = { processName ->
// NOTE: possible handling could involve doing something differently based on the `processName` value
},
).invoke()
}
}
...
}
```
### Resume a FlowX process
Before resuming a process, make sure [authentication](#authentication) and [theming](#theming) were correctly set up.
To resume an existing instance of a FlowX process, after fulfilling all the prerequisites, use the `continueProcess` function:
```kotlin
fun continueProcess(
processUuid: String,
isModal: Boolean = false,
onProcessEnded: (() -> Unit)? = null,
closeModalFunc: ((processName: String) -> Unit)? = null,
): @Composable () -> Unit
```
#### Parameters
| Name | Description | Type | Requirement |
| ---------------- | -------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------------- |
| `processUuid` | The UUID string of the process | `String` | Mandatory |
| `isModal`        | Flag indicating whether the process can be closed at any time by tapping the top-right close button | `Boolean`                          | Optional. It defaults to `false`. |
| `onProcessEnded` | Lambda function where you can do additional processing when the continued process ends | `(() -> Unit)?` | Optional. It defaults to `null`. |
| `closeModalFunc` | Lambda function where you should handle closing the process when `isModal` flag is `true` | `((processName: String) -> Unit)?` | Optional. It defaults to `null`. |
The returned **[@Composable](https://developer.android.com/reference/kotlin/androidx/compose/runtime/Composable)** function must be included in its own **[Activity](https://developer.android.com/reference/android/app/Activity)**, which is part of (controlled and maintained by) the container application.
This wrapper activity must display only the `@Composable` returned from the SDK (i.e. it occupies the whole activity screen space).
#### Sample
```kotlin
class ProcessActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
...
setContent {
FlowxSdkApi.getInstance().continueProcess(
processUuid = "some process UUID string",
isModal = true,
onProcessEnded = {
// NOTE: possible processing could involve doing something in the container app (i.e. navigating to a different screen)
},
closeModalFunc = { processName ->
// NOTE: possible handling could involve doing something differently based on the `processName` value
},
).invoke()
}
}
...
}
```
## Custom components
The container application should decide which custom component view to provide using the `componentIdentifier` configured in the UI designer.
A custom component receives `data` to populate the view and `actions` available to execute, as described below.
To handle custom components, an *implementation* of the `CustomComponentsProvider` interface should be passed as a parameter when initializing the SDK:
```kotlin
interface CustomComponentsProvider {
fun provideCustomComposableComponent(): CustomComposableComponent?
fun provideCustomViewComponent(): CustomViewComponent?
}
```
There are two ways to provide a custom component:
1. by implementing the [CustomComposableComponent](#customcomposablecomponent) interface
2. by implementing the [CustomViewComponent](#customviewcomponent) interface
#### Sample
```kotlin
class CustomComponentsProviderImpl : CustomComponentsProvider {
override fun provideCustomComposableComponent(): CustomComposableComponent? {
return object : CustomComposableComponent {...}
}
override fun provideCustomViewComponent(): CustomViewComponent? {
return object : CustomViewComponent {...}
}
}
```
### CustomComposableComponent
To provide the custom component as a [@Composable](https://developer.android.com/reference/kotlin/androidx/compose/runtime/Composable) function, you have to implement the `CustomComposableComponent` interface:
```kotlin
interface CustomComposableComponent {
fun provideCustomComposable(componentIdentifier: String): CustomComposable
}
```
The returned `CustomComposable` object is an interface defined like this:
```kotlin
interface CustomComposable {
// `true` for the custom components that are implemented and can be handled
// `false` otherwise
val isDefined: Boolean
// `@Composable` definitions for the custom components that can be handled
val composable: @Composable () -> Unit
/**
* Called when the data is available for the custom component
* (i.e. when the User Task that contains the custom component is displayed)
*
* @param data used to populate the custom component
*/
fun populateUi(data: Any?)
/**
* Called when the actions are available for the custom component
* (i.e. when the User Task that contains the custom component is displayed)
*
* @param actions that need to be attached to the custom component (e.g. onClick events)
*/
fun populateUi(actions: Map<String, CustomComponentAction>)
}
```
The value for the `data` parameter received in the `populateUi(data: Any?)` could be:
* `Boolean`
* `String`
* `java.lang.Number`
* `org.json.JSONObject`
* `org.json.JSONArray`
Checking and casting the data according to its needs is the responsibility of the custom component's implementation.
#### Sample
```kotlin
override fun provideCustomComposableComponent(): CustomComposableComponent? {
return object : CustomComposableComponent {
override fun provideCustomComposable(componentIdentifier: String) = object : CustomComposable {
override val isDefined: Boolean = when (componentIdentifier) {
"some custom component identifier" -> true
"other custom component identifier" -> true
else -> false
}
override val composable: @Composable () -> Unit = {
when (componentIdentifier) {
"some custom component identifier" -> { /* add some @Composable implementation */ }
"other custom component identifier" -> { /* add other @Composable implementation */ }
}
}
override fun populateUi(data: Any?) {
// extract the necessary data to be used for displaying the custom components
}
override fun populateUi(actions: Map<String, CustomComponentAction>) {
// extract the available actions that may be executed from the custom components
}
}
}
}
```
### CustomViewComponent
To provide the custom component as a classical Android [View](https://developer.android.com/reference/android/view/View), you have to implement the `CustomViewComponent` interface:
```kotlin
interface CustomViewComponent {
fun provideCustomView(componentIdentifier: String): CustomView
}
```
The returned `CustomView` object is an interface defined like this:
```kotlin
interface CustomView {
// `true` for the custom components that are implemented and can be handled
// `false` otherwise
val isDefined: Boolean
/**
* returns the `View`s for the custom components that can be handled
*/
fun getView(context: Context): View
/**
* Called when the data is available for the custom component
* (i.e. when the User Task that contains the custom component is displayed)
*
* @param data used to populate the custom component
*/
fun populateUi(data: Any?)
/**
* Called when the actions are available for the custom component
* (i.e. when the User Task that contains the custom component is displayed)
*
* @param actions that need to be attached to the custom component (e.g. onClick events)
*/
fun populateUi(actions: Map<String, CustomComponentAction>)
}
```
The value for the `data` parameter received in the `populateUi(data: Any?)` could be:
* `Boolean`
* `String`
* `java.lang.Number`
* `org.json.JSONObject`
* `org.json.JSONArray`
Checking and casting the data according to its needs is the responsibility of the custom component's implementation.
#### Sample
```kotlin
override fun provideCustomViewComponent(): CustomViewComponent? {
return object : CustomViewComponent {
override fun provideCustomView(componentIdentifier: String) = object : CustomView {
override val isDefined: Boolean = when (componentIdentifier) {
"some custom component identifier" -> true
"other custom component identifier" -> true
else -> false
}
override fun getView(context: Context): View {
return when (componentIdentifier) {
"some custom component identifier" -> { /* return some View */ }
"other custom component identifier" -> { /* return other View */ }
}
}
override fun populateUi(data: Any?) {
// extract the necessary data to be used for displaying the custom components
}
override fun populateUi(actions: Map<String, CustomComponentAction>) {
// extract the available actions that may be executed from the custom components
}
}
}
}
```
### Execute action
The custom components which the container app provides may contain FlowX actions available for execution.
These actions are received through the `actions` parameter of the `populateUi(actions: Map<String, CustomComponentAction>)` method.
To run an action (e.g. on the click of a button in the custom component), call the `executeAction` method:
```kotlin
fun executeAction(action: CustomComponentAction, params: JSONObject? = null)
```
#### Parameters
| Name | Description | Type | Requirement |
| -------- | --------------------------------------------------------------------------- | ------------------------------------------------------------- | ------------------------------- |
| `action` | Action object extracted from the `actions` received in the custom component | `ai.flowx.android.sdk.component.custom.CustomComponentAction` | Mandatory |
| `params` | Parameters needed to execute the `action` | `JSONObject?` | Optional. It defaults to `null` |
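As an illustrative sketch, a custom component could cache the received actions and trigger one of them from a button click (the action name `"save"` and the params below are assumptions, not part of the SDK):
```kotlin
// Sketch inside a custom component implementation; the action name ("save")
// and the params content are illustrative assumptions.
private var actions: Map<String, CustomComponentAction> = emptyMap()

override fun populateUi(actions: Map<String, CustomComponentAction>) {
    this.actions = actions
}

// e.g. called from the custom component's button click listener
private fun onSaveClicked() {
    actions["save"]?.let { action ->
        FlowxSdkApi.getInstance().executeAction(
            action = action,
            params = JSONObject().put("amount", 1000),
        )
    }
}
```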
### Get a substitution tag value by key
```kotlin
fun replaceSubstitutionTag(key: String): String
```
All substitution tags will be retrieved by the SDK before starting the process and will be stored in memory.
Whenever the container app needs a substitution tag value for populating the UI of the custom components, it can request the substitution tag using the method above, by providing the `key`.
It returns:
* the corresponding value, if the `key` is valid and found
* an empty string, if the `key` is valid but not found
* the unaltered `key` string, if it has the wrong format (i.e. it does not start with `@@`)
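For example, a usage sketch (the tag key below is an illustrative assumption):
```kotlin
// assuming a substitution tag named `@@paymentScreen.title` is defined in the CMS
val title: String = FlowxSdkApi.getInstance().replaceSubstitutionTag("@@paymentScreen.title")
```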
### Get a media item url by key
```kotlin
fun getMediaResourceUrl(key: String): String?
```
All media items will be retrieved by the SDK before starting the process and will be stored in memory.
Whenever the container app needs a media item url for populating the UI of the custom components, it can request the url using the method above, by providing the `key`.
It returns the `URL` string of the media resource, or `null`, if not found.
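For example (the media item key below is an illustrative assumption):
```kotlin
// assuming a media item with the key `bank_logo` exists in the Media Library
val logoUrl: String? = FlowxSdkApi.getInstance().getMediaResourceUrl("bank_logo")
```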
## Custom header view for the [STEPPER](../docs/building-blocks/process/navigation-areas#stepper) component
The container application can opt to provide a custom view to be used, for all [Stepper](../docs/building-blocks/process/navigation-areas#stepper) components, as a replacement for the built-in header.
The custom view receives `data` to populate its UI, as described below.
To provide a custom header for the [Stepper](../docs/building-blocks/process/navigation-areas#stepper), an *implementation* of the `CustomStepperHeaderProvider` interface should be passed as a parameter when initializing the SDK:
```kotlin
interface CustomStepperHeaderProvider {
fun provideCustomComposableStepperHeader(): CustomComposableStepperHeader?
}
```
As opposed to [custom components](#custom-components), the only supported way of providing the header view is as a [@Composable](https://developer.android.com/reference/kotlin/androidx/compose/runtime/Composable) function.
#### Sample
```kotlin
class CustomStepperHeaderProviderImpl : CustomStepperHeaderProvider {
override fun provideCustomComposableStepperHeader(): CustomComposableStepperHeader? {
return object : CustomComposableStepperHeader {...}
}
}
```
### CustomComposableStepperHeader
To provide the custom header view as a [@Composable](https://developer.android.com/reference/kotlin/androidx/compose/runtime/Composable) function, you have to implement the `CustomComposableStepperHeader` interface:
```kotlin
interface CustomComposableStepperHeader {
fun provideComposableStepperHeader(): ComposableStepperHeader
}
```
The returned `ComposableStepperHeader` object is an interface defined like this:
```kotlin
interface ComposableStepperHeader {
/**
* `@Composable` definition for the custom header view
* The received argument contains the data necessary to render the stepper header view.
*/
val composable: @Composable (data: CustomStepperHeaderData) -> Unit
}
```
The value for the `data` parameter received as function argument is an interface defined like this:
```kotlin
interface CustomStepperHeaderData {
// title for the current step; can be empty or null
val stepTitle: String?
// title for the current selected substep; optional;
// can be empty ("") if not defined or `null` if currently there is no selected substep
val substepTitle: String?
// 1-based index of the current step
val step: Int
// total number of steps
val totalSteps: Int
// 1-based index of the current substep; can be `null` when there are no defined substeps
val substep: Int?
// total number of substeps in the current step; can be `null` or `0`
val totalSubsteps: Int?
}
```
#### Sample
```kotlin
override fun provideComposableStepperHeader(): ComposableStepperHeader {
return object : ComposableStepperHeader {
override val composable: @Composable (data: CustomStepperHeaderData) -> Unit
get() = @Composable { data ->
/* add some @Composable implementation which displays `data` */
}
}
}
```
## Known issues
* shadows are rendered only on **Android >= 28** having [hardware acceleration](https://developer.android.com/topic/performance/hardware-accel) **enabled**
* there is no support yet for [subprocesses](../docs/building-blocks/process/subprocess) started using the [Call Activity node](../docs/building-blocks/node/call-subprocess-tasks/call-activity-node) when configuring a [TabBar](../docs/building-blocks/process/navigation-areas#tab-bar) [Navigation Area](../docs/building-blocks/process/navigation-areas)
{/*- only **[PORTRAIT](https://developer.android.com/guide/topics/manifest/activity-element#screen)** orientation is supported for now*/}
{/*- there is no support for **[Dark Mode](https://developer.android.com/develop/ui/views/theming/darktheme)** yet*/}
{/*- **[CONTAINER](../docs/building-blocks/node/milestone-node.md#container)** milestone nodes are not supported yet*/}
{/*- can not run multiple processes in parallel (e.g. in a Bottom Tab Navigation)*/}
# Angular SDK
FlowxProcessRenderer is a low-code library designed to render UI configured via the FlowX Process Editor.
**Breaking changes**: Starting with version 4.0, the ui-sdk no longer expects the authToken to be present in `localStorage`. Instead, the authToken is passed as an input to the `flx-process-renderer` component. This is mandatory for SSE to work properly.
## Prerequisites
* Node.js min version 18 - [**Download Node.js**](https://nodejs.org/en/blog/release/v18.20.4)
* Angular CLI version 17. Install Angular CLI globally using the following command:
```npm
npm install -g @angular/cli
```
This will allow you to run ng related commands from the terminal.
## Angular project requirements
Your app MUST be generated using the `ng new` command from the `@angular/cli@17` package. It also MUST use SCSS for styling.
```npm
npm install -g @angular/cli@17
ng new my-flowx-app
```
To install the npm libraries provided by FLOWX you will need to obtain access to the private FLOWX Nexus registry. Please consult with your project DevOps.
The library uses Angular version **@angular~18**, **npm v10.8.0** and **node v18.20.4**.
If you are using an older version of Angular (for example, v16), please consult the following link for update instructions:
[**Update Angular from v16.0 to v17.0**](https://angular.dev/update-guide?v=16.0-17.0\&l=1)
## Installing the library
Use the following command to install the **renderer** library and its required dependencies:
```bash
npm install \
@flowx/ui-sdk@5.9.8 \
@flowx/ui-toolkit@5.9.8 \
@flowx/ui-theme@5.9.8 \
vanillajs-datepicker@1.3.4 \
@angular/flex-layout@15.0.0-beta.42 \
@angular/cdk@17.3.9 \
ng2-pdfjs-viewer@17.0.3 \
date-fns \
inputmask
```
A few configurations are needed in the project's `angular.json`:
* in order to successfully link the pdf viewer, add the following declaration in the assets property:
```json
{
"glob": "**/*",
"input": "node_modules/ng2-pdfjs-viewer/pdfjs",
"output": "/assets/pdfjs"
}
```
## Initial setup
Once installed, import `FlxProcessModule` in your `AppModule` using `FlxProcessModule.forRoot({})`.
You **MUST** also import the dependencies of `FlxProcessModule`: `HttpClientModule` from `@angular/common/http`.
### Theming
Component theming is done through the `@flowx/ui-theme` library. The theme id is a required input for the renderer SDK component and is used to fetch the theme configuration. The id can be obtained from the admin panel in the themes section.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/2024-04-08%2013.45.10.gif)
### Authorization
Every request from the **FlowX** renderer SDK is made using the **HttpClientModule** of the client app, which means those requests go through every interceptor you define there. This matters most when building the auth method, as it is the client app's job to intercept and decorate the requests with the necessary auth info (e.g. `Authorization: Bearer ...`).
It's the responsibility of the client app to implement the authorization flow (using the **OpenID Connect** standard). The renderer SDK will expect the authToken to be passed to the `flx-process-renderer` as an input.
```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
import { FlxProcessModule } from '@flowx/ui-sdk';
import {AppRoutingModule} from './app-routing.module';
import {AppComponent} from './app.component';
@NgModule({
declarations: [
AppComponent,
],
imports: [
BrowserModule,
AppRoutingModule,
// will be used by the renderer SDK to make requests
HttpClientModule,
// needed by the renderer SDK
FlxProcessModule.forRoot({
components: {},
services: {},
}),
],
// this interceptor will decorate the requests with the Authorization header
providers: [
{ provide: HTTP_INTERCEPTORS, useClass: AuthInterceptor, multi: true },
],
bootstrap: [AppComponent]
})
export class AppModule {}
```
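The `AuthInterceptor` referenced above belongs to the client app. A minimal sketch, assuming the token is obtained from your own authentication logic, might look like this:
```typescript
import { Injectable } from '@angular/core';
import { HttpEvent, HttpHandler, HttpInterceptor, HttpRequest } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  intercept(req: HttpRequest<unknown>, next: HttpHandler): Observable<HttpEvent<unknown>> {
    // obtain the token from your own authentication logic; hardcoded here for brevity
    const authToken = 'your access token';
    return next.handle(
      req.clone({ setHeaders: { Authorization: `Bearer ${authToken}` } })
    );
  }
}
```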
The `forRoot()` call is required in the application module where the process will be rendered. The `forRoot()` method accepts a config argument where you can pass extra config info, register a **custom component**, **service**, or **custom validators**.
**Custom components** will be referenced by name when creating the template config for a user task.
**Custom validators** will be referenced by name (`currentOrLastYear`) in the template config panel in the validators section of each generated form field.
```typescript
// example with custom component and custom validator
FlxProcessModule.forRoot({
  components: {
    YourCustomComponentIdentifier: CustomComponentInstance,
  },
  services: {
    NomenclatorService,
    LocalDataStoreService,
  },
  validators: { currentOrLastYear },
})

// example of a custom validator that restricts date selection to
// the current or the previous year
// (assumes `moment` is imported and `YEAR_FORMAT` is a date format constant defined elsewhere)
currentOrLastYear: function currentOrLastYear(AC: AbstractControl): { [key: string]: any } | null {
  if (!AC) {
    return null;
  }

  const yearDate = moment(AC.value, YEAR_FORMAT, true);
  const currentDateYear = moment(new Date()).startOf('year');
  const lastYear = moment(new Date()).subtract(1, 'year').startOf('year');

  if (!yearDate.isSame(currentDateYear) && !yearDate.isSame(lastYear)) {
    return { currentOrLastYear: true };
  }

  return null;
}
```
The error that the validator returns **MUST** match the validator name.
The entry point of the library is the `<flx-process-renderer>` component. A list of accepted inputs is found below:
```html
<flx-process-renderer
  [apiUrl]="apiUrl"
  [processApiPath]="processApiPath"
  [authToken]="authToken"
  [themeId]="themeId"
  [processName]="processName"
  [processStartData]="processStartData"
></flx-process-renderer>
```
**Parameters**:
| Name | Description | Type | Mandatory | Default value | Example |
| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------- | --------- | ------------- | ------------------------------------------------ |
| apiUrl | Your base url | string | true | - | [https://yourDomain.dev](https://yourdomain.dev) |
| processApiPath | Process subpath | string | true | - | onboarding |
| authToken | Authorization token | string | true | - | 'eyJhbGciOiJSUzI1NiIsIn....' |
| themeId | Theme id used to style the process. Can be obtained from the themes section in the admin | string | true | - | '123-456-789' |
| processName | Identifies a process | string | true | - | client\_identification |
| processStartData | Data required to start the process | json | true | - | `{ "firstName": "John", "lastName": "Smith"}` |
| debugLogs | When set to true this will print WS messages in the console | boolean | false | false | - |
| language | Language used to localize the application. | string | false | ro-RO | - |
| keepState         | By default all process data is reset when the process renderer component gets destroyed. Setting this to true will keep process data even if the viewport gets destroyed              | boolean | false     | false         | -                                                |
| isDraft           | When true, allows starting a process in draft state. Note that isDraft = true requires that processName be the **id** (number) of the process and NOT the name.                       | boolean | false     | false         | -                                                |
| legacyHttpVersion | Set this to `true` only for HTTP versions \< 2 in order for SSE to work properly. Can be omitted otherwise. | boolean | false | false | - |
#### Data and actions
Custom components will be hydrated with data through the `data$` input observable, which must be defined in the custom component class.
```typescript
@Component({
selector: 'my-custom-component',
templateUrl: './custom-component.component.html',
styleUrls: ['./custom-component.component.scss'],
})
export class CustomComponentComponent {
@Input() data$: Observable<any>;
}
```
Component actions are always found under `data` -> `actionsFn` key.
Action names are configurable via the process editor.
```typescript
// data object example
data: {
  actionsFn: {
    action_one: () => void;
    action_two: () => void;
  }
}
```
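As a sketch, a custom component could cache the `actionsFn` map and invoke a configured action from a click handler (the action name `action_one` is an assumption taken from the example above):
```typescript
import { Component, Input, OnDestroy, OnInit } from '@angular/core';
import { Observable, Subscription } from 'rxjs';

@Component({
  selector: 'my-custom-component',
  template: `<button (click)="runActionOne()">Continue</button>`,
})
export class MyCustomComponent implements OnInit, OnDestroy {
  @Input() data$: Observable<any>;

  private actionsFn: Record<string, () => void> = {};
  private subscription?: Subscription;

  ngOnInit(): void {
    // cache the actions map every time new data arrives
    this.subscription = this.data$.subscribe((data) => {
      this.actionsFn = data?.actionsFn ?? {};
    });
  }

  runActionOne(): void {
    // run the process action configured under the name `action_one`
    this.actionsFn['action_one']?.();
  }

  ngOnDestroy(): void {
    this.subscription?.unsubscribe();
  }
}
```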
#### Interacting with the process
Data from the process is communicated via the **Server-Sent Events** protocol under the following keys:
| Name            | Description                                                                                 |
| --------------- | ------------------------------------------------------------------------------------------- |
| Data            | data updates for the process model bound to default/custom components                        |
| ProcessMetadata | updates about process metadata, e.g. progress updates, data about how to render components   |
| RunAction       | instructs the UI to perform the given action                                                  |
#### Task management component
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/renderer_task_mngment.png)
The `flx-task-management` component is found in the `FlxTaskManagementModule`. In order to have access to it, import the module where needed:
```typescript
import { FlxTaskManagementModule } from '@flowx/ui-sdk';
@NgModule({
declarations: [
...,
],
imports: [
...,
FlxTaskManagementModule
],
})
export class MyModule {}
```
Then in the template:
```xml
<flx-task-management
  [apiUrl]="apiUrl"
  [authToken]="authToken"
  title="Tasks"
  [pollingInterval]="5000"
></flx-task-management>
```
**Parameters**:
| Name | Description | Type | Default | Mandatory | Example |
| --------------- | -------------------------------------- | ------ | ---------- | --------- | ------------------------------------------------------------ |
| apiUrl | Endpoint where the tasks are available | string | - | true | [https://yourDomain.dev/tasks](https://yourDomain.dev/tasks) |
| authToken | Authorization token | string | - | true | |
| title | Table header value | string | Activities | false | Tasks |
| pollingInterval | Interval for polling task updates | number | 5000 ms | false | 10000 |
For basic styling to work, the following configuration is needed in the `angular.json` file:
```json
"stylePreprocessorOptions": {
  "includePaths": [
    "./node_modules/@flowx/ui-sdk/src/assets/scss",
    ...
  ]
},
```
### Development
If you want to start the designer app and the flx-process-renderer library in development mode (no need to recompile the lib for every change) run the following command:
```bash
npm run start:designer
```
When modifying the library source code and testing it inside the designer app use the following command which rebuilds the libraries, recreates the link between the library and the designer app and recompiles the designer app:
`./start_with_build_lib.sh`
Remember to test the final version of the code by building and bundling the renderer library to check that everything works e2e.
Trying to use this lib with npm link from another app will most probably fail. If (when) that happens, there are two alternatives that you can use:
1. Use the `build-and-sync.sh` script, which builds the lib, removes the current build from the client app's `node_modules`, and copies the newly built lib to the `node_modules` dir of the client app:
```bash
./build-and-sync.sh ${path to the client app root}
# example (the client app is demo-web):
./build-and-sync.sh ../../demo-web
```
NOTE: The `build-and-sync:watch` npm script below uses the `build-and-sync.sh` script from the first option under the hood, together with the `chokidar-cli` library, to detect file changes.
2. Use the build-and-sync:watch npm script, that builds the library and copies it to the client app's **node\_module** directory every time a file changes:
```bash
npm run build-and-sync:watch --target-path=${path to the client app root}
# example (the client app is demo-web):
npm run build-and-sync:watch --target-path=../../demo-web
```
### Running the tests
`ng test`
#### Coding style tests
Always follow the Angular official [coding styles](https://angular.io/guide/styleguide).
Below you will find a Storybook that demonstrates how components behave under different states, props, and conditions. It allows you to preview and interact with individual UI components in isolation, without the need for a full-fledged application.
# iOS SDK
## iOS Project Requirements
The minimum requirements are:
* iOS 15
* Swift 5.7
## Installing the library
The iOS Renderer is available through Cocoapods.
### Cocoapods
#### Prerequisites
* Cocoapods gem installed
#### Cocoapods private trunk setup
Add the private trunk repo to your local Cocoapods installation with the command:
```bash
pod repo add flowx-specs git@github.com:flowx-ai-external/flowx-ios-specs.git
```
#### Adding the dependency
Add the source of the private repository in the Podfile
```ruby
source 'git@github.com:flowx-ai-external/flowx-ios-specs.git'
```
Add a post-install hook in the Podfile, setting `BUILD_LIBRARY_FOR_DISTRIBUTION` to `YES`:
```ruby
post_install do |installer|
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings['BUILD_LIBRARY_FOR_DISTRIBUTION'] = 'YES'
end
end
end
```
Add the pod, then run `pod install`:
```ruby
pod 'FlowX'
```
Example
```ruby
source 'https://github.com/flowx-ai-external/flowx-ios-specs.git'
source 'https://github.com/CocoaPods/Specs.git'
post_install do |installer|
installer.pods_project.targets.each do |target|
target.build_configurations.each do |config|
config.build_settings['BUILD_LIBRARY_FOR_DISTRIBUTION'] = 'YES'
end
end
end
target 'AppTemplate' do
# Comment the next line if you don't want to use dynamic frameworks
use_frameworks!
# Pods for AppTemplate
pod 'FlowXRenderer'
target 'AppTemplateTests' do
inherit! :search_paths
# Pods for testing
end
target 'AppTemplateUITests' do
# Pods for testing
end
end
```
### Library dependencies
* Alamofire
* SDWebImageSwiftUI
* SDWebImageSVGCoder
## Configuring the library
The SDK has two configurations, available through shared instances: `FXConfig`, which holds general-purpose properties, and `FXSessionConfig`, which holds user session properties.
It is recommended to call the `FXConfig` configuration method at app launch.
Call the `FXSessionConfig` configure method after the user logs in and a valid user session is available.
### FXConfig
This config is used for general purpose properties.
#### Properties
| Name | Description | Type | Requirement |
| ------------ | ------------------------------------------------------------------- | ------ | ------------------------------ |
| baseURL | The base URL used for REST networking | String | Mandatory |
| enginePath | The process engine url path component | String | Mandatory |
| imageBaseURL | The base URL used for static assets | String | Mandatory |
| locale | The locale used for localization | String | Mandatory. Defaults to "en-us" |
| language | The language used for retrieving enumerations and substitution tags | String | Mandatory. Defaults to "en" |
| logLevel     | Enum value indicating the logging level. Defaults to `none`          | Enum   | Optional                       |
**Sample**
```swift
FXConfig.sharedInstance.configure { (config) in
config.baseURL = myBaseURL
config.enginePath = "engine"
config.imageBaseURL = myImageBaseURL
config.locale = "en-us"
config.language = "en"
config.logLevel = .verbose
}
```
#### Changing the current language
The current language and locale can be changed after the initial configuration, by calling the `changeLocaleSettings` method:
```swift
FXConfig.sharedInstance.changeLocaleSettings(locale: "ro-ro",
language: "en")
```
### FXSessionConfig
This config is used for providing networking or auth session-specific properties.
The library expects either the JWT access token or an Alamofire Session instance managed by the container app. In case a session object is provided, the request adapting should be handled by the container app.
#### Properties
| Name | Description | Type |
| -------------- | --------------------------------------------------- | ------- |
| sessionManager | Alamofire session instance used for REST networking | Session |
| token | JWT authentication access token | String |
#### Sample for access token
```swift
...
func configureFlowXSession() {
FXSessionConfig.sharedInstance.configure { config in
config.token = myAccessToken
}
}
```
#### Sample for session
```swift
import Alamofire
```
```swift
...
func configureFlowXSession() {
FXSessionConfig.sharedInstance.configure { config in
config.sessionManager = Session(interceptor: MyRequestInterceptor())
}
}
```
```swift
class MyRequestInterceptor: RequestInterceptor {
func adapt(_ urlRequest: URLRequest, for session: Session, completion: @escaping (Swift.Result<URLRequest, Error>) -> Void) {
var urlRequest = urlRequest
urlRequest.setValue("Bearer " + accessToken, forHTTPHeaderField: "Authorization")
completion(.success(urlRequest))
}
}
```
### Theming
Make sure the `FXSessionConfig` configure method was called with a valid session before setting up the theme.
Before starting or resuming a process, the theme setup API should be called.
The start or continue process APIs should be called only after the theme setup was completed.
### Theme setup
Theme setup is performed using the shared instance of `FXTheme`:
```swift
public func setupTheme(withUuid uuid: String,
localFileUrl: URL? = nil,
completion: (() -> Void)?)
```
* `uuid` - the UUID of the theme configured in the FlowX Designer.
* `localFileUrl` - optional parameter for providing a fallback theme file, in case the fetch theme request fails.
* `completion` - a completion closure called when the theme setup finishes.
In addition to the `completion` parameter, FXTheme's shared instance also provides a Combine publisher named `themeFetched`, which emits `true` when the theme setup has finished.
#### Sample
```swift
FXTheme.sharedInstance.setupTheme(withUuid: myThemeUuid,
localFileUrl: Bundle.main.url(forResource: "theme", withExtension: "json"),
completion: {
print("theme setup finished")
})
```
```swift
...
var subscription: AnyCancellable?
func myMethodForThemeSetupFinished() {
subscription = FXTheme.sharedInstance.themeFetched.sink { result in
if result {
DispatchQueue.main.async {
// you can now start/continue a process
}
}
}
}
...
```
## Using the library
### Public API
The library's public APIs described in this section are called using the shared instance of FlowX, `FlowX.sharedInstance`.
### Check renderer compatibility
Before using the iOS SDK, it is recommended to check the compatibility between the renderer and the deployed FlowX services.
This can be done by calling `checkRendererVersion`, which has a completion handler containing a `Bool` value.
```swift
FlowX.sharedInstance.checkRendererVersion { compatible in
print(compatible)
}
```
### How to start and end FlowX session
After all the configurations are set, you can start a FlowX session by calling the `startSession()` method.
This is optional, as the session starts lazily when the first process is started.
`FlowX.sharedInstance.startSession()`
When you want to end a FlowX session, you can call the `endSession()` method. This also does a complete clean-up of the started processes.
You might want to use this method in a variety of scenarios, for instance when the user logs out.
`FlowX.sharedInstance.endSession()`
### How to start a process
There are 3 methods available for starting a FlowX process.
The container app is responsible for presenting the navigation controller or tab controller holding the process navigation.
1. Start a process which renders inside an instance of `UINavigationController` or `UITabBarController`, depending on the BPMN diagram of the process.
The controller to be presented will be provided inside the `completion` closure parameter of the method.
Use this method when you want the process to be rendered inside a controller themed using the FlowX Theme defined in the FlowX Designer.
```swift
public func startProcess(applicationUuid: String,
name: String,
params: [String: Any]?,
isModal: Bool = false,
showLoader: Bool = false,
completion: ((UIViewController?) -> Void)?,
onProcessEnded: (() -> Void)? = nil)
```
* `applicationUuid` - the uuid of the application
* `name` - the name of the process
* `params` - the start parameters, if any
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `showLoader` - a boolean indicating whether the loader should be displayed when starting the process.
* `completion` - a completion closure which passes either an instance of `UINavigationController` or `UITabBarController` to be presented.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using `[weak self]`.
2. Start a process which renders inside a provided instance of a `UINavigationController`.
Use this method when you want the process to be rendered inside a custom instance of `UINavigationController`. Optionally, you can pass an instance of `FXNavigationViewController`, which has the appearance set in the FlowX Theme, using `FXNavigationViewController`'s class func `FXNavigationViewController.navigationController()`.
If you use this method, make sure that the process does not use a tab controller as root view.
```swift
public func startProcess(navigationController: UINavigationController,
applicationUuid: String,
name: String,
params: [String: Any]?,
isModal: Bool = false,
showLoader: Bool = false,
onProcessEnded: (() -> Void)? = nil)
```
* `navigationController` - the instance of UINavigationController which will hold the process navigation stack
* `applicationUuid` - the uuid of the application
* `name` - the name of the process
* `params` - the start parameters, if any
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `showLoader` - a boolean indicating whether the loader should be displayed when starting the process.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using `[weak self]`.
3. Start a process which renders inside a provided instance of a `UITabBarController`.
Use this method when you want the process to be rendered inside a custom instance of `UITabBarController`. If you use this method, make sure that the process has a tab controller as root view.
```swift
public func startProcess(tabBarController: UITabBarController,
applicationUuid: String,
name: String,
params: [String: Any]?,
isModal: Bool = false,
showLoader: Bool = false,
onProcessEnded: (() -> Void)? = nil)
```
* `tabBarController` - the instance of UITabBarController which will hold the process navigation
* `applicationUuid` - the uuid of the application
* `name` - the name of the process
* `params` - the start parameters, if any
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `showLoader` - a boolean indicating whether the loader should be displayed when starting the process.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using `[weak self]`.
#### Sample
```swift
FlowX.sharedInstance.startProcess(applicationUuid: applicationUuid,
name: processName,
params: [:],
isModal: true,
showLoader: true) { processRootViewController in
if let processRootViewController = processRootViewController {
processRootViewController.modalPresentationStyle = .overFullScreen
self.present(processRootViewController, animated: false)
}
} onProcessEnded: { [weak self] in
//TODO
}
```
or
```swift
FlowX.sharedInstance.startProcess(navigationController: processNavigationController,
applicationUuid: applicationUuid,
name: processName,
params: startParams,
isModal: true,
showLoader: true)
self.present(processNavigationController, animated: true, completion: nil)
```
or
```swift
FlowX.sharedInstance.startProcess(tabBarController: processTabController,
applicationUuid: applicationUuid,
name: processName,
params: startParams,
isModal: true,
showLoader: true)
self.present(processTabController, animated: true, completion: nil)
```
### How to resume an existing process
There are 3 methods available for resuming a FlowX process.
The container app is responsible for presenting the navigation controller or tab controller holding the process navigation.
1. Continue a process which renders inside an instance of `UINavigationController` or `UITabBarController`, depending on the BPMN diagram of the process.
The controller to be presented will be provided inside the `completion` closure parameter of the method.
Use this method when you want the process to be rendered inside a controller themed using the FlowX Theme defined in the FlowX Designer.
```swift
public func continueExistingProcess(uuid: String,
name: String,
isModal: Bool = false,
completion: ((UIViewController?) -> Void)? = nil,
onProcessEnded: (() -> Void)? = nil)
```
* `uuid` - the UUID string of the process
* `name` - the name of the process
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `completion` - a completion closure which passes either an instance of `UINavigationController` or `UITabBarController` to be presented.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using `[weak self]`.
2. Continue a process which renders inside a provided instance of a `UINavigationController`.
Use this method when you want the process to be rendered inside a custom instance of `UINavigationController`. Optionally, you can pass an instance of `FXNavigationViewController`, which has the appearance set in the FlowX Theme, using `FXNavigationViewController`'s class func `FXNavigationViewController.navigationController()`.
If you use this method, make sure that the process does not use a tab controller as root view.
```swift
public func continueExistingProcess(uuid: String,
name: String,
navigationController: UINavigationController,
isModal: Bool = false,
onProcessEnded: (() -> Void)? = nil)
```
* `uuid` - the UUID string of the process
* `name` - the name of the process
* `navigationController` - the instance of UINavigationController which will hold the process navigation stack
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using `[weak self]`.
3. Continue a process which renders inside a provided instance of a `UITabBarController`.
Use this method when you want the process to be rendered inside a custom instance of `UITabBarController`. If you use this method, make sure that the process has a tab controller as root view.
```swift
public func continueExistingProcess(uuid: String,
name: String,
tabBarController: UITabBarController,
isModal: Bool = false,
onProcessEnded: (() -> Void)? = nil)
```
* `uuid` - the UUID string of the process
* `name` - the name of the process
* `tabBarController` - the instance of UITabBarController which will hold the process navigation
* `isModal` - a boolean indicating whether the process navigation is modally displayed. When the process navigation is displayed modally, a close bar button item is displayed on each screen displayed throughout the process navigation.
* `onProcessEnded` - a closure called when the process ends. The closure is strongly referenced inside the SDK. Avoid reference cycles by using `[weak self]`.
#### Sample
```swift
FlowX.sharedInstance.continueExistingProcess(uuid: uuid,
name: processName,
isModal: true) { processRootViewController in
if let processRootViewController = processRootViewController {
processRootViewController.modalPresentationStyle = .overFullScreen
self.present(processRootViewController, animated: true)
}
} onProcessEnded: { [weak self] in
}
```
or
```swift
FlowX.sharedInstance.continueExistingProcess(uuid: uuid,
name: processName,
navigationController: processNavigationController,
isModal: true)
processNavigationController.modalPresentationStyle = .overFullScreen
self.present(processNavigationController, animated: true, completion: nil)
```
or
```swift
FlowX.sharedInstance.continueExistingProcess(uuid: uuid,
name: processName,
tabBarController: processTabBarController,
isModal: false)
processTabBarController.modalPresentationStyle = .overFullScreen
self.present(processTabBarController, animated: true, completion: nil)
```
### How to end a process
You can manually end a process by calling the `stopProcess(name: String)` method.
This is useful when you want to explicitly ask the FlowX shared instance to clean up the instance of the process sent as parameter.
For example, it could be used for modally displayed processes that are dismissed by the user, in which case the `dismissRequested(forProcess process: String, navigationController: UINavigationController)` method of the FXDataSource will be called.
#### Sample
```swift
FlowX.sharedInstance.stopProcess(name: processName)
```
### FXDataSource
The library offers a way of communication with the container app through the `FXDataSource` protocol.
The data source is a public property of FlowX shared instance.
`public weak var dataSource: FXDataSource?`
```swift
public protocol FXDataSource: AnyObject {
func controllerFor(componentIdentifier: String) -> FXController?
func viewFor(componentIdentifier: String) -> FXView?
func viewFor(componentIdentifier: String, customComponentViewModel: FXCustomComponentViewModel) -> AnyView?
func navigationController() -> UINavigationController?
func errorReceivedForAction(name: String?)
func validate(validatorName: String, value: String) -> Bool
func dismissRequested(forProcess process: String, navigationController: UINavigationController)
func viewForStepperHeader(stepViewModel: StepViewModel) -> AnyView?
}
```
* `func controllerFor(componentIdentifier: String) -> FXController?`
This method is used for providing a custom component using UIKit UIViewController, identified by the componentIdentifier argument.
* `func viewFor(componentIdentifier: String) -> FXView?`
This method is used for providing a custom component using UIKit UIView, identified by the componentIdentifier argument.
* `func viewFor(componentIdentifier: String, customComponentViewModel: FXCustomComponentViewModel) -> AnyView?`
This method is used for providing a custom component using SwiftUI View, identified by the componentIdentifier argument.
A view model is provided as an ObservableObject to be added as @ObservedObject inside the SwiftUI view for component data observation.
* `func navigationController() -> UINavigationController?`
This method is used for providing a navigation controller. It can be either a custom `UINavigationController` class, or just a regular `UINavigationController` instance themed by the container app.
* `func errorReceivedForAction(name: String?)`
This method is called when an error occurs after an action is executed.
* `func validate(validatorName: String, value: String) -> Bool`
This method is used for custom validators. It provides the name of the validator and the value to be validated. The method returns a boolean indicating whether the value is valid or not.
* `func dismissRequested(forProcess process: String, navigationController: UINavigationController)`
This method is called, on a modally displayed process navigation, when the user attempts to dismiss the modal navigation. Typically it is used when you want to present a confirmation pop-up.
The container app is responsible for dismissing the UI and calling the stop process APIs.
* `func viewForStepperHeader(stepViewModel: StepViewModel) -> AnyView?`
This method is used for providing a custom SwiftUI view for the stepper navigation header.
#### Sample
```swift
class MyFXDataSource: FXDataSource {
func controllerFor(componentIdentifier: String) -> FXController? {
switch componentIdentifier {
case "customComponent1":
let customComponent = CustomViewController()
return customComponent
default:
return nil
}
}
func viewFor(componentIdentifier: String) -> FXView? {
switch componentIdentifier {
case "customComponent2":
return CustomView()
default:
return nil
}
}
func viewFor(componentIdentifier: String, customComponentViewModel: FXCustomComponentViewModel) -> AnyView? {
switch componentIdentifier {
case "customComponent2":
return AnyView(SUICustomView(viewModel: customComponentViewModel))
default:
return nil
}
}
func navigationController() -> UINavigationController? {
nil
}
func errorReceivedForAction(name: String?) {
}
func validate(validatorName: String, value: String) -> Bool {
switch validatorName {
case "myCustomValidator":
let myCustomValidator = MyCustomValidator(input: value)
return myCustomValidator.isValid()
default:
return true
}
}
func dismissRequested(forProcess process: String, navigationController: UINavigationController) {
navigationController.dismiss(animated: true, completion: nil)
FlowX.sharedInstance.stopProcess(name: process)
}
func viewForStepperHeader(stepViewModel: StepViewModel) -> AnyView? {
return AnyView(CustomStepperHeaderView(stepViewModel: stepViewModel))
}
}
```
### Custom components
#### FXController
FXController is an open class subclassing UIViewController, which helps the container app provide full custom screens to the renderer.
It needs to be subclassed for each custom screen.
Use this only when the custom component configured in the UI Designer is the root component of the User Task node.
```swift
open class FXController: UIViewController {

    internal(set) public var data: Any?
    internal(set) public var actions: [ProcessActionModel]?

    open func titleForScreen() -> String? {
        return nil
    }

    open func populateUI() {
    }

    open func updateUI() {
    }
}
```
* `internal(set) public var data: Any?`
`data` is the property containing the data model for the custom component. Its type is `Any`, as it could be a primitive value, a dictionary, or an array, depending on the component configuration.
* `internal(set) public var actions: [ProcessActionModel]?`
`actions` is the array of actions provided to the custom component.
* `func titleForScreen() -> String?`
This method is used for setting the screen title. It is called by the renderer when the view controller is displayed.
* `func populateUI()`
This method is called by the renderer, after the controller has been presented, when the data is available.
This will happen asynchronously. It is the container app's responsibility to make sure that the initial state of the view controller does not have default/residual values displayed.
* `func updateUI()`
This method is called by the renderer when an already displayed view controller needs to update the data shown.
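For orientation, here is a minimal sketch of an `FXController` subclass. The screen title, the assumed `["title": String]` data shape, and the label are illustrative assumptions, not part of the SDK contract.
```swift
import UIKit
// The FlowX iOS SDK import is omitted; the module name depends on your setup.

class CustomViewController: FXController {
    private let titleLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(titleLabel)
        // Auto Layout constraints omitted for brevity.
    }

    override func titleForScreen() -> String? {
        return "My custom screen" // illustrative title
    }

    override func populateUI() {
        // Called asynchronously once the process data is available;
        // clear any default/residual state here.
        if let dict = data as? [String: Any] {
            titleLabel.text = dict["title"] as? String
        }
    }

    override func updateUI() {
        // Re-read `data` when the renderer signals an update.
        populateUI()
    }
}
```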
#### FXView
FXView is a protocol that helps the container app provide custom UIKit subviews to the renderer. It needs to be implemented by `UIView` subclasses. Similar to `FXController`, it has `data` and `actions` properties and a populate method.
```swift
public protocol FXView: UIView {
    var data: Any? { get set }
    var actions: [ProcessActionModel]? { get set }

    func populateUI()
}
```
* `var data: Any?`
`data` is the property containing the data model for the custom view. Its type is `Any`, as it could be a primitive value, a dictionary, or an array, depending on the component configuration.
* `var actions: [ProcessActionModel]?`
`actions` is the array of actions provided to the custom view.
* `func populateUI()`
This method is called by the renderer after the screen containing the view has been displayed.
It is the container app's responsibility to make sure that the initial state of the view does not have default/residual values displayed.
It is mandatory for views implementing the FXView protocol to provide the intrinsic content size.
```swift
override var intrinsicContentSize: CGSize {
return CGSize(width: UIScreen.main.bounds.width, height: 100)
}
```
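For orientation, below is a minimal sketch of a UIKit view conforming to `FXView`, matching the `CustomView` returned from the data source sample above. The label and the assumed `["label": String]` data shape are illustrative assumptions.
```swift
import UIKit

class CustomView: UIView, FXView {
    var data: Any?
    var actions: [ProcessActionModel]?

    private let label = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        addSubview(label)
        // Auto Layout constraints omitted for brevity.
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func populateUI() {
        // Called after the screen containing the view has been displayed.
        if let dict = data as? [String: Any] {
            label.text = dict["label"] as? String
        }
    }

    // Views implementing FXView must provide the intrinsic content size.
    override var intrinsicContentSize: CGSize {
        CGSize(width: UIScreen.main.bounds.width, height: 100)
    }
}
```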
#### SwiftUI Custom components
Custom SwiftUI components can be provided as type-erased views.
`FXCustomComponentViewModel` is a class implementing the `ObservableObject` protocol. It is used for managing the state of custom SwiftUI views.
It has two published properties, for data and actions.
```swift
@Published public var data: Any?
@Published public var actions: [ProcessActionModel] = []
```
Example
```swift
struct SampleView: View {
    @ObservedObject var viewModel: FXCustomComponentViewModel

    var body: some View {
        Text("Lorem")
    }
}
```
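As a slightly fuller sketch, the `SUICustomView` referenced in the data source sample above could observe the published data like this; the `"title"` key is an assumed data shape, not part of the SDK contract.
```swift
import SwiftUI

struct SUICustomView: View {
    @ObservedObject var viewModel: FXCustomComponentViewModel

    // Reads an assumed "title" entry from the published data model.
    private var title: String {
        (viewModel.data as? [String: Any])?["title"] as? String ?? ""
    }

    var body: some View {
        VStack(spacing: 8) {
            Text(title)
            // viewModel.actions can be passed to runAction(action:params:),
            // as described in the "How to run an action" section below.
        }
    }
}
```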
### Custom header view for Stepper navigation
The container application can provide a custom view that will be used as the stepper navigation header, using the `FXDataSource` protocol method `viewForStepperHeader`.
The method receives a `StepViewModel` parameter, which provides the data needed to populate the view's UI.
```swift
public struct StepViewModel {
    // title for the current step; optional
    public var stepTitle: String?
    // title for the current substep, if a stepper-in-stepper is configured; optional
    public var substepTitle: String?
    // 1-based index of the current step
    public var step: Int
    // total number of steps
    public var totalSteps: Int
    // 1-based index of the current substep, if a stepper-in-stepper is configured; optional
    public var substep: Int?
    // total number of substeps in the current step, if a stepper-in-stepper is configured; optional
    public var totalSubsteps: Int?
}
```
#### Sample
```swift
struct CustomStepperHeaderView: View {
    let stepViewModel: StepViewModel

    var body: some View {
        VStack(spacing: 16) {
            ProgressView(value: Float(stepViewModel.step) / Float(stepViewModel.totalSteps))
                .foregroundStyle(Color.blue)
            if let stepTitle = stepViewModel.stepTitle {
                Text(stepTitle)
            }
            if let substepTitle = stepViewModel.substepTitle {
                Text(substepTitle)
            }
        }
        .background(Color.white)
        .shadow(radius: 10)
    }
}
```
### How to run an action from a custom component
The custom components provided by the container app can carry FlowX actions to be executed. To run an action, you need to call the following method:
```swift
public func runAction(action: ProcessActionModel,
params: [String: Any]? = nil)
```
`action` - the `ProcessActionModel` action object
`params` - the parameters for the action
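As a hedged sketch, running an action from inside a custom component could look like the following. The action name `"save_data"` and the `name` property on `ProcessActionModel` are illustrative assumptions, and calling `runAction` on `FlowX.sharedInstance` is assumed by analogy with the `stopProcess(name:)` and `updateAuthorization(token:)` calls shown in this guide.
```swift
// Sketch only: assumes ProcessActionModel exposes a `name` property and that
// runAction(action:params:) is available on FlowX.sharedInstance.
func saveTapped(actions: [ProcessActionModel]?) {
    guard let action = actions?.first(where: { $0.name == "save_data" }) else { return }
    FlowX.sharedInstance.runAction(action: action,
                                   params: ["client": ["firstName": "John"]])
}
```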
### How to run an upload action from a custom component
```swift
public func runUploadAction(action: ProcessActionModel,
image: UIImage)
```
`action` - the `ProcessActionModel` action object
`image` - the image to upload
```swift
public func runUploadAction(action: ProcessActionModel,
fileURL: URL)
```
`action` - the `ProcessActionModel` action object
`fileURL` - the local URL of the file to upload
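A short sketch covering both upload variants, under the same assumption that the methods are exposed on `FlowX.sharedInstance`; the asset name and file path are illustrative.
```swift
import UIKit

// Sketch only: the entry point and the sample image/file are assumptions.
func uploadDocument(action: ProcessActionModel) {
    // Variant 1: upload a UIImage directly.
    if let image = UIImage(named: "id_card_scan") {
        FlowX.sharedInstance.runUploadAction(action: action, image: image)
    }

    // Variant 2: upload a file from a local URL.
    let fileURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("statement.pdf")
    FlowX.sharedInstance.runUploadAction(action: action, fileURL: fileURL)
}
```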
### Getting a substitution tag value by key
```swift
public func getTag(withKey key: String) -> String?
```
All substitution tags will be retrieved by the SDK before starting the first process and will be stored in memory.
Whenever the container app needs a substitution tag value for populating the UI of the custom components, it can request the substitution tag using the method above, providing the key.
### Getting a media item url by key
```swift
public func getMediaItemURL(withKey key: String) -> String?
```
All media items will be retrieved by the SDK before starting the first process and will be stored in memory.
Whenever the container app needs a media item url for populating the UI of the custom components, it can request the url using the method above, providing the key.
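A small sketch combining the two lookup helpers above when populating custom UI; the keys are illustrative, and exposing the methods on `FlowX.sharedInstance` is assumed, as before.
```swift
import UIKit

// Sketch only: keys and the shared-instance entry point are assumptions.
func populateHeader(titleLabel: UILabel) {
    titleLabel.text = FlowX.sharedInstance.getTag(withKey: "header.title")

    if let logoURL = FlowX.sharedInstance.getMediaItemURL(withKey: "header.logo") {
        // Load the image from logoURL with your preferred image-loading approach.
        print("Logo URL: \(logoURL)")
    }
}
```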
### Handling authorization token changes
When the access token of the auth session changes, you can update it in the renderer using the `func updateAuthorization(token: String)` method.
```swift
FlowX.sharedInstance.updateAuthorization(token: accessToken)
```
# React SDK
The `FlxProcessRenderer` is a low-code library designed to render UI configured via the FlowX Process Editor.
## React project requirements
Your app MUST use SCSS for styling.
To install the npm libraries provided by FLOWX you will need to obtain access to the private FLOWX Nexus registry. Please consult with your project DevOps.
The library requires **react\~18**, **npm v10.8.0** and **node v18.16.9**.
## Installing the library
Use the following command to install the **renderer** library and its required dependencies:
Installing `react` and `react-dom` can be skipped if you already have them installed in your project.
```bash
npm install \
react@18 \
react-dom@18 \
@flowx/core-sdk@<version> \
@flowx/core-theme@<version> \
@flowx/react-sdk@<version> \
@flowx/react-theme@<version> \
@flowx/react-ui-toolkit@<version> \
air-datepicker@3 \
axios \
ag-grid-react@32
```
Make sure to replace `<version>` with the version corresponding to the platform version that you are using.
## Initial setup
Once installed, `FlxProcessRenderer` can be imported from the `@flowx/react-sdk` package.
### Theming
Component theming is done through the `@flowx/react-theme` library. The theme id is a required input for the renderer SDK component and is used to fetch the theme configuration. The id can be obtained from the admin panel in the themes section.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/2024-04-08%2013.45.10.gif)
### Authorization
It's the responsibility of the client app to implement the authorization flow (using the **OpenID Connect** standard). The renderer SDK will expect the authToken to be passed to the `FlxProcessRenderer` as an input.
```typescript.tsx
import { FlxProcessRenderer } from '@flowx/react-sdk';

export function MyFlxContainer() {
  // Pass the authToken obtained through your OpenID Connect flow,
  // together with the other props listed in the parameters table below.
  return <FlxProcessRenderer authToken={authToken} />;
}
```
The `FlxProcessRenderer` component is required in the application module where the process will be rendered. The component accepts props where you can pass extra config info and register **custom components** or **custom validators**.
**Custom components** will be referenced by name when creating the template config for a user task.
**Custom validators** will be referenced by name (`customValidator`) in the template config panel in the validators section of each generated form field.
```typescript.tsx
import { FlxProcessRenderer } from '@flowx/react-sdk';

export function MyFlxContainer() {
  return <FlxProcessRenderer
    components={{ MyCustomComponent }}
    validators={{ customValidator: (v: string) => v === '4.5' }}
    staticAssetsPath={...}
    locale="en-US"
    language="en"
    appInfo={{
      appId: ...
    }}
    buildId={...}
  />;
}
```
The entry point of the library is the `<FlxProcessRenderer />` component. A list of accepted inputs is found below:
**Parameters**:
| Name             | Description                                                                                                                                                        | Type    | Mandatory | Default value | Example                                          |
| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------- | --------- | ------------- | ------------------------------------------------ |
| apiUrl           | Your base url                                                                                                                                                      | string  | true      | -             | [https://yourDomain.dev](https://yourdomain.dev) |
| processApiPath   | Process subpath                                                                                                                                                    | string  | true      | -             | onboarding                                       |
| authToken        | Authorization token                                                                                                                                                | string  | true      | -             | 'eyJhbGciOiJSUzI1NiIsIn....'                     |
| themeId          | Theme id used to style the process. Can be obtained from the themes section in the admin                                                                          | string  | true      | -             | '123-456-789'                                    |
| processName      | Identifies a process                                                                                                                                               | string  | true      | -             | client\_identification                           |
| processStartData | Data required to start the process                                                                                                                                 | json    | true      | -             | `{ "firstName": "John", "lastName": "Smith"}`    |
| language         | Language used to localize the enumerations inside the application.                                                                                                 | string  | false     | ro            | -                                                |
| isDraft          | When true, allows starting a process in draft state. \*Note that isDraft = true requires that processName be the **id** (number) of the process and NOT the name. | boolean | false     | false         | -                                                |
| locale           | Defines the locale of the process, used to apply date, currency and number formatting to data model values                                                        | string  | false     | ro-RO         | -                                                |
| appInfo          | Defines which FlowX Application will be run inside the process renderer.                                                                                           | json    | true      | -             | `{ "appId": "111111-222222-333333-44444"}`       |
| buildId          | Defines which FlowX Application build will be run inside the process renderer. Can be used for version controlling the processes.                                  | string  | true      | -             | "111111-222222-333333-44444"                     |
## Starting a process
### Prerequisites
* **Process Name**: You need to know the name of the process you want to start. This name is used to identify the process in the system.
* **FlowX Application UUID**: You need the UUID of the FlowX Application that contains the process you want to start. This UUID is used to identify the application in the system.
* **Locale**: You can specify the locale of the process to apply date, currency, and number formatting to data model values.
* **Language**: You can specify the language used to localize the enumerations inside the application.
### Getting the application UUID
The application UUID can be obtained from the FlowX Dashboard. Navigate to the Applications section and select the application you want to start a process in. The UUID can be copied from the application actions popover.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/react/copy_uuid_application.png)
### Getting the process name
The process name can be obtained from the FlowX Designer. Navigate to the process you want to start and copy the process name from the breadcrumbs.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/react/copy_process_name.png)
### Initializing the process renderer
To start a process, you need to initialize the `FlxProcessRenderer` component in your application. The component accepts various props that define the process to start, the theme to use, and other configuration options.
```typescript.tsx
import { FlxProcessRenderer } from '@flowx/react-sdk';

export function MyFlxContainer() {
  return <FlxProcessRenderer
    apiUrl="https://yourDomain.dev"
    processApiPath="onboarding"
    authToken={authToken}
    themeId="123-456-789"
    processName="client_identification"
    processStartData={{ firstName: 'John', lastName: 'Smith' }}
    locale="en-US"
    language="en"
    appInfo={{ appId: '111111-222222-333333-44444' }}
    buildId="111111-222222-333333-44444"
  />;
}
```
## Custom components
Custom components are hydrated with data through the `data` input prop, which must be defined in the custom component.
Custom components are provided through the `components` parameter of the `<FlxProcessRenderer />` component.
The object keys passed in the `components` prop **MUST** match the custom component names defined in the FlowX process.
Component data defined through an `inputKey` is available under the `data` -> `data` key.
Component actions are always found under `data` -> `actionsFn` key.
```typescript.tsx
export const MyCustomComponent = ({ data }) => {...}
```
```typescript
// data object example
data: {
  data: {
    input1: ''
  },
  actionsFn: {
    action_one: () => void,
    action_two: () => void
  }
}
```
To add a custom component in the template config tree, we need to know its unique identifier and the data it should receive from the process model.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_designer_custom.png)
The properties that can be configured are as follows:
* **Identifier** - This enables the custom component to be displayed within the component hierarchy and determines the actions available for the component.
* **Input keys** - These are used to specify the pathway to the process data that components will utilize to receive their information.
* [**UI Actions**](../../ui-actions) - actions defined here will be made available to the custom component
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/building-blocks/ui-designer/ui_designer_custom_settings.png#center)
### Prerequisites (before creation)
* **React Knowledge**: You should have a good understanding of React, as custom components are created and imported using React.
* **Development Environment**: Set up a development environment for React development, including Node.js and npm (Node Package Manager).
* **Component Identifier**: You need a unique identifier for your custom component. This identifier is used for referencing the component within the application.
### Creating a custom component
To create a Custom Component in React, follow these steps:
1. Create a new React component.
2. Implement the necessary HTML structure, TypeScript logic, and SCSS styling to define the appearance and behavior of your custom component.
### Importing the component
After creating the Custom Component, you need to import it into your application.
In your `<FlxProcessRenderer />` component, add the following property:
```tsx
components={{ MyCustomComponent }}
```
### Using the custom component
Once your Custom Component is declared, you can use it for configuration within your application.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/loader_component.gif)
### Data input and actions
The Custom Component accepts input data from processes and can also include actions extracted from a process. These inputs and actions allow you to configure and interact with the component dynamically.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/cst_input_data.png)
### Extracting data from processes
There are multiple ways to extract data from processes to use within your Custom Component. You can utilize the data provided by the process or map actions from the BPMN process to React actions within your component.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/cst_loader_input.png)
Make sure that the React actions that you declare match the names of the process actions.
### Styling with CSS
To apply CSS classes to UI elements within your Custom Component, you first need to identify the UI element identifiers within your component's HTML structure. Once identified, you can apply defined CSS classes to style these elements as desired.
Example:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release34/Screenshot%202023-10-10%20at%2012.29.51.png)
### Additional considerations
* **Naming Conventions**: Be consistent with naming conventions for components, identifiers, and actions. Ensure that React actions match the names of process actions as mentioned in the documentation.
* **Component Hierarchy**: Understand how the component fits into the overall component hierarchy of your application. This will help determine where the component is displayed and what actions are available for it.
* **Documentation and Testing**: Document your custom component thoroughly for future reference. Additionally, testing is crucial to ensure that the component behaves as expected in various scenarios.
* **Security**: If your custom component interacts with sensitive data or performs critical actions, consider security measures to protect the application from potential vulnerabilities.
* **Integration with FLOWX Designer**: Ensure that your custom component integrates seamlessly with FLOWX Designer, as it is part of the application's process modeling capabilities.
## Custom validators
You may also define custom validators in your FlowX processes and pass their implementation through the `validators` prop of the `<FlxProcessRenderer />` component.
The validators are then processed and piped through the popular [React Hook Form](https://www.react-hook-form.com/api/useform/register/) library, taking into account how the error messages are defined in your process.
A validator must have the following type:
```typescript
const customValidator = (...params: string[]) => (v: any) => boolean | Promise<boolean>
```
The object keys passed in the `validators` prop **MUST** match the custom validator names defined in the FlowX process.
## Custom CSS
The renderer SDK allows you to pass custom CSS classes on any component inside the process. These classes are then applied to the component's root element.
To add a CSS custom class to a component, you need to define the class in the process designer by navigating to the styles tab of the component, expanding the Advanced accordion and writing down the CSS class.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/react/add_css_class.gif)
The classes will be applied last on the element, so they will override the classes already defined on the element.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/react/css_class_inspector.png)
# SDKs overview
FLOWX.AI provides web and native mobile SDKs. These SDKs enable developers to create applications that can be displayed in a browser, embedded in an internet banking interface, or in a mobile banking app. The SDKs automatically generate the user interface (UI) based on the business process and data points created by a business analyst, reducing the need for UX/UI expertise.
SDKs are used in the Angular, React, iOS, and Android applications to render the process screens and orchestrate the custom components.
# IAM solution
Identity and access management (IAM) is a framework of business processes, policies and technologies that facilitates the management of electronic or digital identities. With an IAM framework in place, you can control user access to critical information/components within an organization.
## What is an Identity Provider (IdP)?
The IdP, Identity-as-a-Service (IDaaS), Privileged Identity/Access Management (PIM/PAM), Multi-factor/Two-factor Authentication (MFA/2FA), and numerous other subcategories are included in the IAM category.
IdP is a subset of an IAM solution that is dedicated to handling fundamental user IDs. The IdP serves as the authoritative source for defining and confirming user identities.
The IdP can be considered maybe the most important subcategory of the IAM field because it often lays the foundation of an organization's overall identity management infrastructure. In fact, other IAM categories and solutions, such as [IDaaS](https://jumpcloud.com/blog/identity-as-a-service-idaas), PIM/PAM, MFA/2FA, and others are often layered on top of the core IdP and serve to federate core user identities from the IdP to various endpoints. Therefore, your choice in IdP will have a profound influence on your overall IAM architecture.
We recommend **Keycloak**, a component that allows you to create users and store credentials. It can also be used for authorization: defining groups and assigning roles to users.
Every communication that comes from a consumer application goes through a public entry point (the API Gateway). To communicate with this component, the consumer application tries to start a process; the public entry point checks for authentication (Keycloak issues a token) and validates it.
## Configuring access rights
Granular access rights can be configured for restricting access to the FLOWX.AI components and their features or to define allowed actions for each type of user. Access rights are based on user roles that need to be configured in the identity provider management solution.
To configure the roles for the users, they need to be added first to an identity provider (IdP) solution. **The access rights-related configuration needs to be set up for each microservice**. Default options are preconfigured. They can be overwritten using environment variables.
For more details you can check the next links:
For more information on how to add roles and how to configure an IdP solution, check the following section:
## Using Keycloak with an external IdP
Recommended Keycloak version: **22.x**
In all cases, IdP authentication is mandatory. Beyond that, everything is configurable: attribute mapping, including roles and groups, can be customized, or the entire authorization can be performed by Keycloak.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_1.png)
### AD or LDAP provider
For Lightweight Directory Access Protocol (LDAP) and Active Directory, this Keycloak functionality is called federation, or external storage. Keycloak includes a built-in LDAP/AD provider.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_2.png)
More details:
Configuration example:
### SAML, OpenID Connect, OAuth 2.0
For SAML, OpenID Connect, and OAuth 2.0, the Keycloak functionality is called brokering. Synchronization is performed during user login.
More details:
Configuration examples for ADFS:
# Application manager access rights
Granular access rights can be configured for restricting access to the Application-manager component.
The **Application Manager** component provides granular access rights, allowing users to perform various actions depending on their assigned roles and the configured scopes.
In order for users to view resources within the Application Manager, they must have, in addition to the appropriate `role_apps_manage_` role, at least **read access** on each [**resource**](../../docs/projects/resources).
### Available access scopes
1. **manage-applications**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_APPS_MANAGE_READ`
* `ROLE_APPS_MANAGE_IMPORT`
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **import**
* **Roles**:
* `ROLE_APPS_MANAGE_IMPORT`
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_APPS_MANAGE_ADMIN`
2. **manage-app-dependencies**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_APP_DEPENDENCIES_MANAGE_READ`
* `ROLE_APP_DEPENDENCIES_MANAGE_EDIT`
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_APP_DEPENDENCIES_MANAGE_EDIT`
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
3. **manage-builds**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_BUILDS_MANAGE_READ`
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_IMPORT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **import**
* **Roles**:
* `ROLE_BUILDS_MANAGE_IMPORT`
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_BUILDS_MANAGE_ADMIN`
4. **manage-active-policy**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_ACTIVE_POLICY_MANAGE_READ`
* `ROLE_ACTIVE_POLICY_MANAGE_EDIT`
* **edit**
* **Roles**:
* `ROLE_ACTIVE_POLICY_MANAGE_EDIT`
5. **manage-app-configs**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_APP_CONFIG_MANAGE_READ`
* `ROLE_APP_CONFIG_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **import**
* **Roles**:
* `ROLE_APP_CONFIG_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
6. **manage-app-configs-overrides**
* **Scopes**:
* **read**
* **Roles**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_READ`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **import**
* **Roles**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **edit**
* **Roles**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **admin**
* **Roles**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
### Permissions explained
* **Permissions**:
* Can view Projects entry in main menu
* Add icon for Applications and Libraries sections is hidden - cannot add application or library
* Can view application or library Config view in read-only mode (for draft application versions) with action buttons hidden
* Can export application version
* **Restrictions**:
* Cannot start a draft application version
* Cannot discard changes
* Cannot create build
* Cannot create new branch
* Cannot import application version
* Cannot commit a draft application version
* Cannot merge branches
* Can view draft application version in read-only mode with buttons hidden
* **Roles allowed**:
* `ROLE_APPS_MANAGE_READ`
* `ROLE_APPS_MANAGE_IMPORT`
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **Permissions**:
* Can view Projects entry in main menu
* Can create new application or library
* Can merge branches
* Can create new branch
* Can start new application version
* Can submit application version
* Cannot delete application - Delete icon in contextual menu is hidden
* **Roles allowed**:
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **Permissions**:
* Can view Import Version entry on:
* Projects page
* Application versioning overlay
* Can view Export version button on application versioning overlay
* **Roles allowed**:
* `ROLE_APPS_MANAGE_IMPORT`
* `ROLE_APPS_MANAGE_EDIT`
* `ROLE_APPS_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read, edit, import
* Can delete application or library
* **Roles allowed**: `ROLE_APPS_MANAGE_ADMIN`
* **Permissions**:
* Can view Builds entry in application Runtime tab menu
* Can view Builds page
* Can view Builds content (contextual menu > Build contents)
* Cannot import build
* Projects page > Import icon > Import build is not shown
* **Roles allowed**:
* `ROLE_BUILDS_MANAGE_READ`
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_IMPORT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **Permissions**:
* Can see Create build button on Application Versioning overlay for a committed application version
* **Roles allowed**:
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **Permissions**:
* Can view Builds entry in application Runtime tab menu
* Can import builds
* **Roles allowed**:
* `ROLE_BUILDS_MANAGE_EDIT`
* `ROLE_BUILDS_MANAGE_IMPORT`
* `ROLE_BUILDS_MANAGE_ADMIN`
* **Permissions**:
* Can do all of the above
* **Roles allowed**:
* `ROLE_BUILDS_MANAGE_ADMIN`
* **Permissions**:
* Can view Active policy entry in application Runtime tab menu
* Can view Active policy page in read-only mode - Fields and Save button are hidden
* **Roles allowed**:
* `ROLE_ACTIVE_POLICY_MANAGE_READ`
* `ROLE_ACTIVE_POLICY_MANAGE_EDIT`
* **Permissions**:
* All permissions under read
* Can update active policy settings - fields and save button are enabled
* **Roles allowed**: `ROLE_ACTIVE_POLICY_MANAGE_EDIT`
* **Permissions**:
* Can view Configuration parameters in Application Config View menu
* Can view Configuration parameters page in read-only mode
* **Roles allowed**:
* `ROLE_APP_CONFIG_MANAGE_READ`
* `ROLE_APP_CONFIG_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can import configuration parameters
* **Roles allowed**:
* `ROLE_APP_CONFIG_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can add/edit/delete configuration parameters
* Cannot import configuration parameters
* **Roles allowed**:
* `ROLE_APP_CONFIG_MANAGE_EDIT`
* `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **Permissions**:
* All permissions for read, edit, import
* **Roles allowed**: `ROLE_APP_CONFIG_MANAGE_ADMIN`
* **Permissions**:
* Can view Configuration parameters overrides in Application Runtime View menu
* Can view Configuration parameters overrides page in read-only mode:
* cannot add configuration param override
* cannot edit a configuration param override
* cannot delete a configuration param override
* **Roles allowed**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_READ`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can import configuration parameters
* **Roles allowed**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_IMPORT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can add/edit configuration parameters overrides
* **Roles allowed**:
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_EDIT`
* `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read, edit, import
* Can delete app config overrides
* **Roles allowed**: `ROLE_APP_CONFIG_OVERRIDES_MANAGE_ADMIN`
* **Permissions**:
* Can view Dependencies entry in Application Config view menu
* Can view Dependencies page in read-only mode
* **Roles allowed**:
* `ROLE_APP_DEPENDENCIES_MANAGE_READ`
* `ROLE_APP_DEPENDENCIES_MANAGE_EDIT`
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read
* Can add/edit dependencies
* **Roles allowed**:
* `ROLE_APP_DEPENDENCIES_MANAGE_EDIT`
* `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
* **Permissions**:
* All permissions under read, edit
* Can delete dependency
* **Roles allowed**: `ROLE_APP_DEPENDENCIES_MANAGE_ADMIN`
### Configuring access
To define or adjust access for these roles, use the following format in your environment variables:
```plaintext
SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES
```
Roles must be defined in your identity provider (e.g., Keycloak, RH-SSO, Entra, or any other compatible provider).
Custom roles can be configured as needed, and multiple roles can be assigned to each scope.
# Admin access rights
Granular access rights can be configured for restricting access to the Admin component.
Access authorizations are provided, each with specified access scopes:
1. **Manage-platform** - for configuring access for managing platform details
Available scopes:
* **read** - users are able to view platform status
* **admin** - users are able to force health check scan
2. **Manage-processes** - for configuring access for managing process definitions
Available scopes:
* **import** - users are able to import process definitions and process stages
* **read** - users are able to view process definitions and stages
* **edit** - users are able to edit process definitions
* **admin** - users are able to publish and delete process definitions, delete stages, edit sensitive data for process definitions
3. **Manage-configurations** - for configuring access for managing generic parameters
Available scopes:
* **import** - users are able to import generic parameters
* **read** - users are able to view generic parameters
* **edit** - users are able to edit generic parameters
* **admin** - users are able to delete generic parameters
4. **Manage-users** - for configuring access for access management
Available scopes:
* **read** - users are able to read all users, groups and roles
* **edit** - users are able to create/update any user group or roles
* **admin** - users are able to delete users, groups or roles
5. **Manage-integrations** - for configuring integrations with adapters
Available scopes:
* **import** - users are able to import integrations
* **read** - users are able to view all the integrations, scenarios and scenarios configuration(topics/ input model/ output model/ headers)
* **edit** - users are able to create/update/delete any values for integrations/scenarios and also scenarios configuration (topics/input model/ output model/ headers)
* **admin** - users are able to delete integrations/scenarios with all children
The Admin service is configured with the following default user roles for each of the access scopes mentioned above:
* **manage-platform**
* read:
* ROLE\_ADMIN\_MANAGE\_PLATFORM\_READ
* ROLE\_ADMIN\_MANAGE\_PLATFORM\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_PLATFORM\_ADMIN
* **manage-processes**
* import:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_IMPORT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* read:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_READ
* ROLE\_ADMIN\_MANAGE\_PROCESS\_IMPORT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* edit:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* **manage-configurations**
* import:
* ROLE\_ADMIN\_MANAGE\_CONFIG\_IMPORT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_EDIT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN
* read:
* ROLE\_ADMIN\_MANAGE\_CONFIG\_READ
* ROLE\_ADMIN\_MANAGE\_CONFIG\_IMPORT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_EDIT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN
* edit:
* ROLE\_ADMIN\_MANAGE\_CONFIG\_EDIT
* ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN
* **manage-users**
* read:
* ROLE\_ADMIN\_MANAGE\_USERS\_READ
* ROLE\_ADMIN\_MANAGE\_USERS\_EDIT
* ROLE\_ADMIN\_MANAGE\_USERS\_ADMIN
* edit:
* ROLE\_ADMIN\_MANAGE\_USERS\_EDIT
* ROLE\_ADMIN\_MANAGE\_USERS\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_USERS\_ADMIN
* **manage-integrations**
* import:
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_IMPORT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_EDIT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN
* read:
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_READ
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_IMPORT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_EDIT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN
* edit:
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_EDIT
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN
These roles need to be defined in the chosen identity provider solution, whether that is Keycloak, RH-SSO, or another identity provider.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
**`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`**
Possible values for `AUTHORIZATIONNAME`: `MANAGEPLATFORM`, `MANAGEPROCESSES`, `MANAGECONFIGURATIONS`, `MANAGEUSERS`.
Possible values for `SCOPENAME`: import, read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGEPROCESSES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```
# Integration Designer access rights
Granular access rights can be configured to restrict access to the Integration Designer.
Access authorizations in Integration Designer are provided with specified access scopes for both system and workflow management:
1. **Manage-systems** - for configuring access to integration systems.
**Available scopes:**
* **import** - allows users to import integration systems.
* **read** - allows users to view integration systems.
* **edit** - allows users to edit integration systems.
* **admin** - allows users to administer integration systems.
2. **Manage-workflows** - for configuring access to integration workflows.
**Available scopes:**
* **import** - allows users to import integration workflows.
* **read\_restricted** - allows users to view restricted integration workflows.
* **read** - allows users to view all integration workflows.
* **edit** - allows users to edit integration workflows.
* **admin** - allows users to administer integration workflows.
### Default Roles for Integration Designer
The Integration Designer service is configured with the following default user roles for each access scope mentioned above:
* **manage-systems**
* import:
* `ROLE_INTEGRATION_SYSTEM_IMPORT`
* `ROLE_INTEGRATION_SYSTEM_EDIT`
* `ROLE_INTEGRATION_SYSTEM_ADMIN`
* read:
* `ROLE_INTEGRATION_SYSTEM_READ`
* `ROLE_INTEGRATION_SYSTEM_EDIT`
* `ROLE_INTEGRATION_SYSTEM_ADMIN`
* edit:
* `ROLE_INTEGRATION_SYSTEM_EDIT`
* `ROLE_INTEGRATION_SYSTEM_ADMIN`
* admin:
* `ROLE_INTEGRATION_SYSTEM_ADMIN`
* **manage-workflows**
* import:
* `ROLE_INTEGRATION_WORKFLOW_IMPORT`
* `ROLE_INTEGRATION_WORKFLOW_EDIT`
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
* read\_restricted:
* `ROLE_INTEGRATION_WORKFLOW_READ_RESTRICTED`
* `ROLE_INTEGRATION_WORKFLOW_READ`
* `ROLE_INTEGRATION_WORKFLOW_EDIT`
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
* read:
* `ROLE_INTEGRATION_WORKFLOW_READ`
* `ROLE_INTEGRATION_WORKFLOW_EDIT`
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
* edit:
* `ROLE_INTEGRATION_WORKFLOW_EDIT`
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
* admin:
* `ROLE_INTEGRATION_WORKFLOW_ADMIN`
> **Warning:** These roles must be defined in the selected identity provider, such as Keycloak, Red Hat Single Sign-On (RH-SSO), or another compatible identity provider.
### Customizing Access Roles
In cases where additional custom roles are required, you can configure them using environment variables. Multiple roles can be assigned to each access scope as needed.
**Environment Variable Format:**
To configure access for each role, use the following format:
**`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`**
* **Possible values for `AUTHORIZATIONNAME`:** `MANAGE_SYSTEMS`, `MANAGE_WORKFLOWS`.
* **Possible values for `SCOPENAME`:** `import`, `read`, `read_restricted`, `edit`, `admin`.
For example, to configure a custom role with read access to manage systems, use:
```plaintext
SECURITY_ACCESSAUTHORIZATIONS_MANAGE_SYSTEMS_SCOPES_READ_ROLESALLOWED: ROLE_CUSTOM_SYSTEM_READ
```
# Configuring an IAM solution (Keycloak)
This guide provides step-by-step instructions for configuring a minimal Keycloak setup to manage users, roles, and applications efficiently. Keycloak is an open-source Identity and Access Management (IAM) solution that makes it easy to secure applications and services with little to no coding.
## Prerequisites
Before you begin, ensure you have the following:
* Keycloak installed
* Administrative access to the Keycloak server
* Basic understanding of IAM concepts
Recommended Keycloak version: **22.x**
## Recommended Keycloak setup
To configure the minimal required Keycloak setup, this guide covers the following steps:
1. Define the available roles and the realm-level roles assigned to new users.
2. Configure the client authentication, valid redirect URIs, and enable the necessary flows.
3. Set up the **admin**, **task management**, **process engine** and **scheduler** service accounts.
Before starting, if you need further information or a broader understanding of Keycloak, refer to the official Keycloak documentation:
## Creating a new realm
A realm is a space where you manage objects, including users, applications, roles, and groups. Creating a new realm is the first step in setting up Keycloak. Follow the steps below to create a new realm in Keycloak:
Log in to the Keycloak Admin Console using the appropriate URL for your environment (e.g., QA, development, production).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/iam1.png)
In the top left corner dropdown menu, click **Create Realm**. If you are logged in to the master realm, this dropdown menu lists all the realms created.
Enter a realm name and click **Create**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_3.png)
Configure the **Realm Settings**, such as SSO Session Idle and Access Token Lifespan, according to your organization's needs:
**Sessions** -> **SSO Session idle**: Set to **30 Minutes** (recommended).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_4.png)
**Tokens** -> **Access Token Lifespan**: Set to **30 Minutes** (recommended).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_5.png)
**Common Pitfalls**:
* Ensure that the realm name is unique within your Keycloak instance.
* Double-check session idle and token lifespan settings to align with your security requirements.
## Creating/importing user groups and roles
User groups and roles are essential for managing user permissions and access levels within a realm. You can either create or import user groups into a realm.
To import a super admin group with the necessary default user roles, download and run the provided script.
Instructions:
* Unzip the downloaded file.
* Open a terminal and navigate to the unzipped folder.
* Run the script using the appropriate command for your operating system.
After importing, add an admin user to the group and assign the necessary roles.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/key1.png)
Check the default roles to ensure correct import:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/key2.png)
**Common Pitfalls**:
* Ensure the script has the necessary permissions to run on your system.
* Verify that the roles and groups align with your organizational structure.
## Creating new users
Creating new users is a fundamental part of managing access within Keycloak. Follow these steps to create a new user in a realm and generate a temporary password:
In the left menu bar, click **Users** to open the user list page.
On the right side of the empty user list, click **Add User**.
Fill in the user details and set **Email Verified** to **Yes**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/key3.png)
In the **Groups** section, search for the group (in our case, `FLOWX_SUPER_USERS`) and click **Join**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_6.png)
Save the user, go to the **Credentials** tab, and set a temporary password. Ensure the **Temporary** checkbox is checked.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_7.png)
**Common Pitfalls**:
* Ensure that the email address is valid and correctly formatted.
* Set the temporary password policy according to your organization’s security requirements.
## Adding clients
A client represents an instance of an application and is associated with a specific realm.
### Adding an OAuth 2.0 client
We'll add a client named `flowx-platform-authenticate`, which will be used for login, logout, and refresh token operations by web and mobile apps.
Click **Clients** in the top left menu, then click **Create client**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak11.png)
In the **General Settings** tab configure the following properties:
* Set a client ID to `{your-client-name}-authenticate`.
* Set the **Client type** to `OpenID Connect`.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_8.png)
Now click **Next** and configure the **Capability config** details:
* Enable **Direct Access Grants**.
* Enable **Implicit flow**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_9.png)
Click **Next** and configure **Login settings**:
* Set **Valid redirect URIs**, specifying a valid URI pattern that a browser can redirect to after a successful login or logout. Simple wildcards are allowed.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_10.png)
After creating the client, scroll down in the **Settings** tab and configure additional settings - **Logout Settings**:
* **Backchannel Logout Session Required**: Toggle OFF.
* **Front Channel Logout**: Toggle OFF.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak14.png)
Add **mappers** to `{your-client-name}-authenticate` client.
For instructions on adding mappers and understanding which mappers to add to your clients, refer to the section on [**Adding Protocol Mappers**](#adding-protocol-mappers).
### Adding an Authorizing client
To authorize REST requests to microservices and Kafka, create and configure the `{your-client-name}-platform-authorize` client.
Enter the client ID (`{your-client-name}-platform-authorize`).
Set **Client type** to **OpenID Connect**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak12.png)
**Client Authentication**: Toggle ON
This setting defines the type of the OIDC client. When enabled, the client type is set to "confidential access"; when disabled, it is set to "public access".
Disable **Direct Access Grants**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak13.png)
**Valid Redirect URIs**: Populate this field with the appropriate URIs.
Save the configuration.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak15.png)
After creating the client, scroll down in the **Settings** tab and configure additional settings - **Logout Settings**:
* **Backchannel Logout Session Required**: Toggle OFF.
* **Front Channel Logout**: Toggle OFF.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak14.png)
Once you have configured these settings, the `{your-client-name}-platform-authorize` client will be created and can be used to authorize REST requests to microservices and Kafka within your application.
#### Example configuration for microservices
Below is an example of a minimal configuration for microservices using OAuth2 with the `{your-client-name}-platform-authorize` client:
```yaml
security:
type: oauth2 #Specifies the security type as OAuth2.
basic:
enabled: false #Disables basic authentication.
oauth2:
base-server-url: http://localhost:8080 #Sets the base server URL for the Keycloak server
realm: flowx #Specifies the Keycloak realm
client:
access-token-uri: ${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/token
client-id: your-client-name-platform-authorize
client-secret: CLIENT_SECRET
resource:
user-info-uri: ${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/userinfo
```
| Configuration Key | Value/Example | Description |
| ----------------------------------------- | ----------------------------------------------------------------------------------------------------- | -------------------------------------------------- |
| `security.type` | `oauth2` | Specifies the security type as OAuth2. |
| `security.basic.enabled` | `false` | Disables basic authentication. |
| `security.oauth2.base-server-url` | `http://localhost:8080` | Sets the base server URL for the Keycloak server. |
| `security.oauth2.realm` | `flowx` | Specifies the Keycloak realm. |
| `security.oauth2.client.access-token-uri` | `${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/token` | Defines the URL for obtaining access tokens. |
| `security.oauth2.client.client-id` | `your-client-name-platform-authorize` | Sets the client ID for authorization. |
| `security.oauth2.client.client-secret` | `CLIENT_SECRET` | Provides the client secret for authentication. |
| `security.oauth2.resource.user-info-uri` | `${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/userinfo` | Specifies the URL for retrieving user information. |
## Adding protocol mappers
Protocol mappers in Keycloak allow you to transform tokens and documents, enabling actions such as mapping user data into protocol claims and modifying requests between clients and the authentication server. This provides greater customization and control over the information contained in tokens and exchanged during authentication processes.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_11.png)
To enhance your clients' functionality, add the following common mappers:
* [Group Membership mapper](#group-membership-mapper) (`realm-groups`)
* Maps user groups to the authorization token.
* [User Attribute mapper](#user-attribute-mapper) (`business filter mapper`)
* Maps custom attributes, for example, mapping the [businessFilters ](/4.0/docs/platform-deep-dive/user-roles-management/business-filters) list, to the token claim.
* [User Realm role](#user-realm-role) (`realm-roles`)
* Maps a user's realm role to a token claim.
The mappers we use can also be configured to control the data returned by the `/userinfo` endpoint, in addition to being included in tokens. This capability is a feature that not all Identity Providers (IDPs) support.
By incorporating these mappers, you can further customize and enrich the information contained within your tokens, ensuring they carry the necessary data for your applications.
### Group Membership mapper
Steps to add a Group Membership mapper:
From the Keycloak admin console, go to **Clients** and select your desired client.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_gm.png)
In the client settings, click on **Client Scopes**.
Select the **dedicated client scope**: `{your-client-name}-authenticate-dedicated` to open its settings.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_gm1.png)
Make sure the **Mappers** tab is selected within the dedicated client scope.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_gm2.png)
Click **Add Mapper**. From the list of available mappers, choose **Group Membership**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_gm3.png)
**Name**: Enter a descriptive name for the mapper to easily identify its purpose, for example `realm-groups`.
**Token Claim Name**: Set the token claim name, typically as `groups`, for including group information in the token.
**Add to ID Token**: Toggle OFF.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/gm5.png)
By configuring the group membership mapper, you will be able to include the user's group information in the token for authorization purposes.
### User Attribute mapper
To include custom attributes such as business filters in the token claim, follow these steps to add a user attribute mapper:
From the Keycloak admin console, go to **Clients** and select your desired client.
Click on **Client Scopes** and choose `{your-client-name}-authenticate-dedicated` to open its settings.
Ensure the **Mappers** tab is selected.
Click **Add mapper**. From the list of available mappers, select **User Attribute**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_ua1.png)
* **Mapper Type**: Select **User Attribute**.
* **Name**: Enter a descriptive name for the mapper, such as "Business Filters Mapper".
* **User Attribute**: Enter `businessFilters`.
* **Token Claim Name**: Enter `attributes.businessFilters`.
* **Add to ID Token**: Toggle OFF.
* **Multivalued**: Toggle ON.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_ua2.png)
By adding this user attribute mapper, the custom attribute "businessFilters" will be included in the token claim under the name "attributes.businessFilters". This enables you to access and utilize business filters information within your application.
For more information about business filters, refer to the following section:
### User realm role mapper
To add a roles mapper to the `{your-client-name}-authenticate` client, so roles will be available in the OAuth user info response, follow these steps:
From the Keycloak admin console, go to **Clients** and select your desired client.
Click on **Client Scopes** and choose `{your-client-name}-authenticate-dedicated` to open its settings.
Ensure the **Mappers** tab is selected.
Click **Add Mapper**. From the list of available mappers, select **User Realm Role**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keycloak_ua3.png)
* **Name**: Enter a descriptive name for the mapper, such as "Roles Mapper".
* **Mapper Type**: Select **User Realm Role**.
* **Token Claim Name**: Enter `roles`.
* **Add to ID Token**: Toggle OFF.
* **Add to access token**: Toggle OFF.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/keucloak_ua4.png)
By adding this roles mapper, the assigned realm roles of the user will be available in the OAuth user info response under the claim name "roles". This allows you to access and utilize the user's realm roles within your application.
Please note that you can repeat these steps to add multiple roles mappers if you need to include multiple realm roles in the token claim.
### Examples
#### Login
To request a login token:
```curl
curl --location --request POST 'http://localhost:8080/realms/flowx/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'username=admin@flowx.ai' \
--data-urlencode 'password=password' \
--data-urlencode 'client_id=your-client-name-authenticate'
```
#### Refresh token
To refresh an existing token:
```curl
curl --location --request POST 'http://localhost:8080/realms/flowx/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=refresh_token' \
--data-urlencode 'client_id=your-client-name-authenticate' \
--data-urlencode 'refresh_token=REFRESH_TOKEN'
```
#### User info
To retrieve user information:
```curl
curl --location --request GET 'localhost:8080/realms/flowx/protocol/openid-connect/userinfo' \
--header 'Authorization: Bearer ACCESS_TOKEN'
```
## Adding service accounts
**What is a service account?**
A service account grants direct access to the Keycloak API for a specific component. Each client can have a built-in service account that allows it to obtain an access token.
To use this feature, you must enable **Client authentication** (access type) for your client. When you do this, the **Service Accounts Enabled** switch will appear.
### Admin service account
The admin service account is used by the admin microservice to connect with Keycloak, enabling user and group management features within the FlowX.AI Designer.
Steps to add an Admin service account:
Navigate to **Clients** and select **Create client**.
Enter a **Client ID** for your new client.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/admin-sa1.png)
* Enable **Client authentication** (access type).
* Disable **Standard flow**.
* Disable **Direct access grants**.
* Enable **Service accounts roles**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/admin-sa2.png)
After creating the client, scroll down in the **Settings** tab and configure additional settings - **Logout Settings**:
* **Backchannel Logout Session Required**: Toggle OFF.
* **Front Channel Logout**: Toggle OFF.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/admin-sa3.png)
In the newly created client, navigate to the **Service accounts roles** tab.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/admin-sa5.png)
Click **Assign role** and in the Filter field, select **Filter by clients**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/admin-sa6.png)
Assign the necessary roles to the admin service account based on the required access scopes, such as:
* **manage-realm**
* **manage-users**
* **query-users**
In the end, you should have something similar to this:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/md1.png)
Ensure you have created a realm-management client to include the necessary client roles.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/admin-sa7.png)
The admin service account does not require mappers as it doesn't utilize roles. Service account roles include client roles from `realm-management`.
For more detailed information on admin access rights, refer to the following section:
### Task Management service account
The task management service account facilitates process initiation, enables the use of the task management plugin (requiring the `FLOWX_ROLE` and a role mapper), and allows access to data from Keycloak.
Steps to Add a Task Management service account:
Follow steps **1**-**3** as in the Admin Service account configuration: [Admin service account](#admin-service-account).
Assign the necessary service accounts client roles to the Task Management plugin service account based on the required access scopes, such as:
* **view-realm**
* **view-users**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/admin-sa8.png)
The task management plugin service account requires a realm roles mapper to function correctly. Make sure to configure this to ensure proper operation.
In the end, you should have something similar to this:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/tsk_sa.png)
In the newly created task management plugin service account, select **Client Scopes**:
Click on `{your-client-name}-service-account` to open its settings.
Ensure the Mappers tab is selected within the dedicated client scope.
Click **Add mapper**. From the list of available mappers, select **User Realm Role**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/tsk-mapper.png)
**Name**: Enter a descriptive name for the mapper to easily identify its purpose, for example `realm-roles`.
**Token Claim Name**: Set it to `roles`.
Disable **Add to ID token**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/gm7.png)
Click **Save**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/md2.png)
Assign the `FLOWX_ROLE` service account realm role.
The `FLOWX_ROLE` grants permissions for starting processes in the FlowX.AI Designer platform. By default, this role is named `FLOWX_ROLE`, but its name can be changed in the Engine's application configuration by setting the following environment variable:
`FLOWX_PROCESS_DEFAULTROLES`
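For example, to use a custom role name instead of the default (an illustrative sketch; the role name is a placeholder):
```yaml
FLOWX_PROCESS_DEFAULTROLES: MY_CUSTOM_PROCESS_ROLE
```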
For more information about task management plugin access rights, check the following section:
### Process engine service account
The process engine requires a process engine service account to make direct calls to the Keycloak API.
This service account is also required to use the [**Start Catch Event**](../../docs/building-blocks/node/message-events/message-catch-start-event) node.
**To create the process engine service account**:
* **1-3**: Follow the same steps as in the Admin service account configuration: [Admin service account](#admin-service-account).
To assign the necessary service account roles:
This service account does not require service account client roles. It needs a realm role (to be able to start process instances) and a realm-roles mapper.
4. Add the `FLOWX_ROLE` service account realm role (used to grant permissions for starting processes):
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/flowx_role.gif)
In the end, you should have something similar to this:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/md3.png)
5. Add a **realm-roles** mapper:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/realm_roles_engine.gif)
### Scheduler service account
This service account is used for the [**Start Timer Event**](../../docs/building-blocks/node/timer-events/timer-start-event) node. Timers registered in the scheduler must send a process start message to Kafka, and this operation requires authentication.
The configuration is similar to the **process engine service account**:
* Assign the `FLOWX_ROLE` as a service account realm role (needed to start process instances).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/md4.png)
* Add a **realm-roles** mapper (as shown in the example for process-engine service account).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/release40/md5.png)
### Integration Designer service account
The Integration Designer service account is used by the integration designer microservice to interact securely with Keycloak, enabling it to manage various integrations within the FlowX.AI platform.
Steps to set up an Integration Designer service account:
* In Keycloak, navigate to **Clients** and select **Create client**.
* Enter a **Client ID** for your new client (e.g., `integration-designer-sa`).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/keycloak/id-sa.png)
* Enable **Client authentication** under access type.
* Enable **Service accounts roles** to allow the account to manage integrations.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/4.5/keycloak/Screenshot%202024-11-05%20at%2013.58.06.png)
* Skip the **Login settings** page.
* Click **Save** to apply the configuration.
For further details on configuring access rights and roles for the Integration Designer service account, refer to the following section:
By following these steps, you will have a minimal Keycloak setup to manage users, roles, and applications efficiently. For more detailed configurations and advanced features, refer to the official Keycloak documentation.
# Configuring an IAM solution (EntraID)
This guide provides step-by-step instructions for configuring a minimal EntraID setup to manage users, roles, and applications efficiently.
## Overview
Microsoft Entra is Microsoft’s unified identity and access management solution designed to protect access to applications, resources, and user data across an organization’s environment. It provides a robust identity and access management (IAM) framework, allowing secure access control, role management, and integration of various applications under a single directory. Entra is crucial for managing multi-cloud and hybrid environments securely, enforcing policies, and supporting both on-premises and cloud resources.
## Prerequisites
* Application Administrator role
* Basic understanding of IAM and OIDC concepts
## Recommended EntraID setup
This setup configures Microsoft Entra to manage and secure access for FlowX.AI applications, handling user roles, custom attributes, and application-specific permissions. The setup covers these main components:
* Flowx-Web and Flowx-API are the core applications that act as entry points for the FlowX.AI platform. Additional applications like Flowx-Admin, Task Management Plugin, and Scheduler Core are registered to support specific functionalities.
* Each application registration includes settings for authentication, API permissions, and role assignments.
* OAuth 2.0 and OIDC protocols are configured, enabling secure access to resources.
* Roles and permissions are assigned through Entra, and single sign-on (SSO) is set up for ease of access across applications.
* Token Configuration includes defining claims (e.g., `email`, `groups`) for use in JWTs, which are used for secure identity validation across services.
* API Permissions are managed using Microsoft Graph, which governs access to resources like user profiles and groups within FlowX.AI.
* Custom attribute extensions (e.g., `businessFilter`) allow organizations to apply additional filters or metadata to user and group profiles, configured and managed using Microsoft Graph CLI.
* Helm charts provide a structured setup for deploying FlowX.AI applications in containerized environments.
* Key values such as `tenant_id`, `client_id`, and `client_secret` are configured to support authentication and secure access.
* JWT tokens are configured to carry user claims, roles, and custom attributes, ensuring that each token provides comprehensive identity details for FlowX.AI applications.
### Flowx-web app registration
The Flowx-web application serves as the main entry point for logging into the FlowX Designer or container applications.
#### Application registration steps
To register the Flowx-web application, follow these steps:
1. Navigate to [https://portal.azure.com](https://portal.azure.com) and log in to your EntraID directory, which will host your FlowX.AI application registrations.
2. Go to **Microsoft EntraID > App registrations > New registration**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/image%20%283%29.png)
3. Enter a name for your application, then select **Accounts in this organizational directory only (Single tenant)** to limit access to your organization’s directory.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2010.56.35.png)
4. Click **Register** to complete the setup.
You will be redirected to the overview of the newly created app registration.
#### Authentication steps
Follow these steps to configure authentication for the Flowx-web application:
1. Go to the **Authentication** tab. Under **Platform configurations**, add a new platform by selecting **Single-page application (SPA)**. Then, set the **Redirect URIs** to point to the URIs of your Designer application.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/2024-11-04%2010.54.45.gif)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2010.58.31.png)
2. Click **Configure** to save the platform settings.
3. Next, click **Add URI** to include an additional redirect URI, this time pointing to your container application's URI.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2017.14.29.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.31.19.png)
4. Click **Save** to confirm the redirect URI changes.
5. Scroll down to **Advanced Settings**. Under **Mobile and Desktop Applications**, toggle **Enable the following mobile and desktop flows** to **Yes**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2017.16.55.png)
6. Click **Save** again to apply all changes.
#### API permissions
To configure the necessary API permissions, follow these steps:
1. Navigate to the **API permissions** tab and click **Add a permission**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.33.31.png)
2. In the permissions menu, select **Microsoft Graph** and then choose **Delegated permissions**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.35.56.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.36.46.png)
3. Add the following permissions by selecting each option under **OpenId permissions**:
* email
* offline\_access
* openid
* profile
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.38.35.png)
4. After adding these permissions, click **Add permissions** to confirm.
#### Token configuration
Configure the claims you want to include in the ID token.
1. Navigate to the **Token configuration** tab. Click **Add optional claim**, then select **Token type > Access**.
* Choose the following claims to include in the token:
* email
* family\_name
* given\_name
* preferred\_username
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.40.21.png)
2. Click **Add** to save these optional claims.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.41.33.png)
3. Next, add group claims to the token by clicking **Add groups claim**.
4. Select **All groups** and, under each token type, select **sAMAccountName** (this may differ for your specific organization).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/image%20%285%29.png)
#### Setting token expiration policies
For organizations that require specific control over token lifetimes, Microsoft Entra allows customization of token expiration policies.
1. **Create a Custom Token Lifetime Policy**: Define the desired expiration settings for access, ID, and refresh tokens in the policy.
2. **Assign the Policy to a Service Principal**: Apply the policy to your Flowx-web or Flowx-API app registrations to enforce token lifetime requirements.
For more details on creating and assigning policies for token expiration, refer to [**Microsoft's guide on configuring token lifetimes**](https://learn.microsoft.com/en-us/entra/identity-platform/configure-token-lifetimes#create-a-policy-and-assign-it-to-a-service-principal).
Adjusting token lifetimes can enhance security by reducing the window for unauthorized access.
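As an illustration, such a policy can be created through the Microsoft Graph API. A minimal sketch, assuming `$GRAPH_TOKEN` holds a Graph access token with the `Policy.ReadWrite.ApplicationConfiguration` permission and a two-hour access token lifetime is desired:
```bash
curl --location --request POST 'https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies' \
--header "Authorization: Bearer $GRAPH_TOKEN" \
--header 'Content-Type: application/json' \
--data '{
  "definition": ["{\"TokenLifetimePolicy\":{\"Version\":1,\"AccessTokenLifetime\":\"02:00:00\"}}"],
  "displayName": "FlowX token lifetime policy",
  "isOrganizationDefault": false
}'
```
The policy must then be assigned to the service principal of the relevant app registration, as described in Microsoft's guide linked above.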
***
### Flowx-API app registration
The Flowx-API application is used to configure the access token necessary for secure communication between the browser and all exposed FlowX APIs.
#### Application registration steps
To register the Flowx-API application, follow these steps:
1. Navigate to [https://portal.azure.com](https://portal.azure.com) and log in to your EntraID directory, which will host your FlowX.AI application registrations.
2. Go to **Microsoft EntraID > App registrations > New registration**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/image%20%283%29.png)
3. Enter a name for your application, then select **Accounts in this organizational directory only (Single tenant)** to limit access to your organization’s directory.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2012.04.27.png)
4. Click **Register** to complete the setup.
You will be redirected to the overview page of the newly created app registration.
#### API permissions
To configure the necessary API permissions, follow these steps:
1. Go to the **API permissions** tab and click **Add a permission**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/ent1.png)
2. In the permissions menu, select **Microsoft Graph** and then choose **Delegated permissions**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.35.56.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.36.46.png)
3. Add the following permissions by selecting each option under **OpenId permissions**:
* email
* offline\_access
* openid
* profile
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2011.38.35.png)
4. After adding these permissions, click **Add permissions** to confirm.
#### Token configuration
Configure the claims you want to include in the ID token.
1. Navigate to the **Token configuration** tab. Click **Add optional claim**, then select **Token type > Access**.
* Choose the following claims to include in the token:
* email
* family\_name
* given\_name
* preferred\_username
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2015.36.40.png)
2. Click **Add** to save these optional claims.
3. Next, add group claims to the token by clicking **Add groups claim**.
4. Select **All groups** and, under each token type, select **sAMAccountName** (this may differ for your specific organization).
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2016.14.01.png)
#### Expose an API
To configure the API exposure and define scopes:
1. In the **Expose an API** section, click **Add** under **Application ID URI**. It’s recommended to use the application’s name for consistency.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2018.11.53.png)
2. Click **Save**.
3. Under **Scopes defined by this API**, click **Add a scope** and configure it as follows:
* **Scope name**: `FlowxAI.ReadWrite.All`
* **Who can consent**: Admins and users
* **Admin consent display name**: Full API Access for FlowX.AI Platform
* **Admin consent description**: Grants this application full access to all available APIs, allowing it to read, write, and manage resources across the FlowX.AI platform.
* **User consent display name**: Same as admin consent display name
* **User consent description**: Same as admin consent description
* **State**: Enabled
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2018.16.52.png)
This scope is not used directly to grant permissions. Instead, it is included in login requests made from a web client. When a client makes a login request with this scope, Entra ID uses it to identify and provide the appropriate access token configured here, ensuring secure access.
4. Under **Authorized client applications**, click **Add a client application**. Add each of the following client applications, selecting the `FlowxAI.ReadWrite.All` scope:
* flowx-web
* flowx-admin
* flowx-process-engine
* flowx-integration-designer
* flowx-task-management-plugin
* flowx-scheduler-core
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2018.38.41.png)
Client IDs for these applications can be found on the **Overview** page of each respective application. If some applications are not created yet, you can return and add them to this section after their creation.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2018.36.39.png)
#### Application roles
To configure application roles, follow these steps:
1. Navigate to **App roles** and click **Create app role**.
2. Under **Allowed member types**, select **Both (Users/Groups + Applications)**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2018.43.05.png)
3. Complete the role details with the following information:
* Display name: The name displayed for the role.
* Value: A unique identifier for the role within applications.
* Description: A description of the role’s purpose.
The app role list should match the Keycloak setup. A list of default roles can be found [**here**](default-roles).
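For reference, each app role appears as an entry in the app registration's manifest, roughly like the sketch below (shown here for one of the default roles; the `id` GUID is generated by Entra):
```json
{
  "allowedMemberTypes": ["User", "Application"],
  "description": "Grants permission to start processes in FlowX.AI",
  "displayName": "FLOWX_ROLE",
  "id": "<generated-guid>",
  "isEnabled": true,
  "value": "FLOWX_ROLE"
}
```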
### Other applications registration
The following FlowX.AI applications require similar steps for registration:
* Flowx-Admin
* Flowx-Process-Engine
* Flowx-Integration-designer
* Flowx-Task-Management-Plugin
* Flowx-Scheduler-Core
***
### Flowx-Admin app registration
1. **Create a New Application Registration**
* Go to [https://portal.azure.com](https://portal.azure.com) and log in to your Entra ID directory where you will host FlowX.AI application registrations.
2. **Register the Application**
* Navigate to **Microsoft Entra ID > App registrations > New registration**.
* Set the application name and select **Accounts in this organizational directory only (Single tenant)**.
* Click **Register**. You will be redirected to the overview page of the newly created app registration.
#### Configure client secrets
1. Navigate to **Certificates & secrets**.
2. Under **Client secrets**, click **New client secret**.
3. Set a **description** and choose an **expiration time** for the client secret, then click **Add**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2018.57.31.png)
Copy the generated client secret value. This will be used to configure `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_SECRET`.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2018.59.12.png)
#### Configure API permissions
1. Go to the **API permissions** tab and click **Add a permission**.
2. Select **Microsoft Graph > Application permissions**.
3. Add the following permissions for **flowx-admin**:
* **Application.Read.All**
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2019.01.20.png)
If you have admin privileges, you can click **Grant admin consent** to apply these permissions. If not, contact your tenant administrator to grant consent.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2019.03.00.png)
***
### Flowx-Process-Engine app registration
Follow the same **Application Registration Steps** and **Configure Client Secrets** steps as above.
#### Configure API permissions
* **API Permissions**: No additional permissions required.
***
### Flowx-Integration-Designer app registration
Follow the same **Application Registration Steps** and **Configure Client Secrets** steps as above.
#### Configure API permissions
* **API Permissions**: No additional permissions required.
***
### Flowx-Task-Management-Plugin app registration
Follow the same **Application Registration Steps** and **Configure Client Secrets** steps as above.
#### Configure API permissions
1. Go to the **API permissions** tab and click **Add a permission**.
2. Select **Microsoft Graph > Application permissions**.
3. Add the following permissions for **flowx-task-management-plugin**:
* **Application.Read.All**
* **Group.Read.All**
* **User.Read.All**
If you have admin privileges, you can click **Grant admin consent** to apply these permissions. If not, contact your tenant administrator to grant consent.
***
### Flowx-Scheduler-Core app registration
Follow the same **Application Registration Steps** and **Configure Client Secrets** steps as above.
#### Configure API permissions
* **API Permissions**: No additional permissions required.
***
### Assigning a role to a user/group
To assign a role to a user or group for your FlowX.AI applications, follow these steps:
1. Go to [https://portal.azure.com](https://portal.azure.com) and log in to your Entra ID directory that hosts your FlowX.AI application registrations.
2. Navigate to **Microsoft Entra ID > Enterprise applications** and search for your **flowx-api** app registration name.\
(An enterprise application with the same name was automatically created when the app registration was set up.)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2019.11.43.png)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2019.12.51.png)
3. Under **Users and groups**, select **Add user/group**.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2019.14.17.png)
* Choose the user or group you want to assign.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2019.16.53.png)
* Select the appropriate role from the available options.
4. Click **Assign** to complete the role assignment.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2019.19.39.png)
It is recommended to provide roles through group membership for easier management and scalability.
***
### Adding custom attributes
Using Microsoft Graph CLI, you can add custom attributes such as `businessFilter`.
For more information about Microsoft Graph CLI, check the following docs:
#### Prerequisites
* [Install the Microsoft Graph CLI](https://learn.microsoft.com/en-us/graph/cli/installation?tabs=macos).
#### Create an attribute extension property
Create an Attribute Extension Property on the flowx-api app registration:
1. Log in to Microsoft Graph CLI with the necessary permissions:
```bash
$ mgc login --scopes Directory.Read.All
```
You can add additional permissions by repeating the `mgc login` command with the new permission scopes.
2. Create the attribute extension property by running the following command. Replace `<application-object-id>` with the object ID of your flowx-api application:
```bash
$ mgc applications extension-properties create --application-id <application-object-id> --body '
{
  "dataType": "String",
  "name": "businessFilter",
  "targetObjects": [
    "User", "Group"
  ]
}'
```
#### Retrieve the attribute extension name
To confirm the attribute extension name, use the command below. This will output the exact name of the created extension property.
```bash
$ mgc applications extension-properties list --application-id <application-object-id> --select name
```
Example output:
```json
{
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#applications(\u0027\u0027)/extensionProperties(name)",
"value": [
{
"name": "extension_ec959542898b42bcb6922e7d3f9df282_businessFilter"
}
]
}
```
#### Configure token claim
1. Go to the **flowx-api** app registration in the Azure portal.
2. Navigate to **Token configuration**.
3. Click **Add optional claim**.
* Select **Token type** as **Access**.
* Check the box for `extn.businessFilter`.
4. Click **Add** to save the changes.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/Screenshot%202024-11-04%20at%2019.31.05.png)
#### Assign attribute extension to a user
1. Log in with the required permissions to modify user attributes:
```bash
$ mgc login --scopes User.ReadWrite.All
```
2. Assign the `businessFilter` attribute to a user by running the command below. Replace `<user-object-id>` with the user's object ID:
```bash
$ mgc users patch --user-id <user-object-id> --body '
{
  "extension_ec959542898b42bcb6922e7d3f9df282_businessFilter": "docs"
}'
```
#### Assign attribute extension to a group
Follow similar steps to assign the `businessFilter` attribute to a group. Replace `<group-object-id>` with the group's object ID and use the following commands:
1. Log in with the required permissions to modify group attributes:
```bash
$ mgc login --scopes Group.ReadWrite.All
```
2. Assign the custom attribute by running the command below, replacing `<group-object-id>` with the group's object ID. The `businessFilter` attribute is set to `"docs"` in this example.
```bash
$ mgc groups patch --group-id <group-object-id> --body '
{
  "extension_ec959542898b42bcb6922e7d3f9df282_businessFilter": "docs"
}'
```
***
## Example JWT token for user
To verify that the custom attributes and roles have been correctly applied, you can inspect a sample JWT token issued to a user. This token will include standard claims along with any custom attributes and roles configured in your Entra ID setup.
### Steps to retrieve a JWT token
1. **Login to the FlowX.AI Application**\
Log in to the FlowX.AI application as the user for whom the JWT token needs to be inspected.
2. **Retrieve the JWT Token**\
After logging in, retrieve the JWT token by one of the following methods:
* Using browser developer tools to inspect network requests (look for requests with an `Authorization` header).
* Accessing a token endpoint if available, using a tool like Postman.
3. **Decode the JWT Token**\
Use a JWT decoding tool, such as [jwt.io](https://jwt.io/), to decode and inspect the token.
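If you prefer the command line to a web-based decoder, the payload segment of a token stored in `$ACCESS_TOKEN` can be decoded with a small shell sketch (assumes `jq` is installed):
```bash
# Extract the payload (second dot-separated segment) and convert base64url to base64.
payload=$(cut -d '.' -f2 <<< "$ACCESS_TOKEN" | tr '_-' '/+')
# Restore the padding that JWT encoding strips, then decode and pretty-print.
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
echo "$payload" | base64 -d | jq .
```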
***
### Sample JWT token structure
Below is an example JWT token structure that includes key claims, custom attributes, and roles:
```json
{
"aud": "api://rd-p-example-flowx-api",
"iss": "https://sts.windows.net/673cac6c-3d63-40cf-a43f-07408dd91072/",
"iat": 1730720856,
"nbf": 1730720856,
"exp": 1730726397,
"acr": "1",
"aio": "ATQAy/8YAAAAj3ca5D/znraYUsif7RVc7TmWJPj66tqsUon0oon1xPamN1W7wN070R1JwaCwUQyQ",
"amr": [
"pwd"
],
"appid": "673b5314-a9c8-40ec-beb5-636fa9a781b4",
"appidacr": "0",
"email": "john.doe@flowx.ai",
"extn.businessFilter": [
"docs"
],
"family_name": "Doe",
"given_name": "John",
"groups": [
"ef731a0d-b44f-44da-bd78-67363c901bb1",
"db856713-0dfa-4d3d-aefa-bbb598257084",
"4336202b-6fc4-4132-afab-7f6573993325",
"5dc0b52e-823b-4ce9-b3e4-b3070912a4ef",
"ce006d40-555f-4247-890b-1053fa3cb172",
"291ac248-4e29-4c91-8e1d-19cbeec64eb8",
"b82dc551-f3f0-4d28-aaf0-a0a74fe3b3e3",
"42b39b5f-7545-48be-88d1-6e88716236db",
"cc0f776a-1cb2-4b8c-a472-8e1393764442",
"6eac9487-e04c-41e6-81ce-364f09c22bbf",
"01c30789-6862-4085-b5c4-f0cb02fb41b0",
"75ac188b-61c4-4aa9-ad7e-af1d543e199a",
"e726fda5-79f0-440b-b86c-8a9820d14d2e",
"259980bb-e881-4d93-9912-d2562441a257",
"9146edd4-6194-4487-b524-79956635f514",
"ce046ce2-6ef8-40f2-9f4e-a70f1ca14ecf",
"62d1f9f5-858c-43e2-af92-94bcc575681b",
"69df5ff6-1da9-49d1-9871-b7e62de2b212",
"043d25fc-a507-47ee-83e3-1d31ce0d9b35"
],
"ipaddr": "86.126.6.183",
"name": "Jonh Doe",
"oid": "61159071-3fd6-4373-8aec-77ee58675776",
"preferred_username": "john.doe@flowx.ai",
"rh": "1.AYEAbKw8Z2M9z0CkPwdAjdkQckKVleyLibxCtpIufT-d8oKBAAeBAA.",
"roles": [
"ROLE_DOCUMENT_TEMPLATES_IMPORT",
"FLOWX_ROLE",
"ROLE_TASK_MANAGER_OOO_ADMIN",
"ROLE_ADMIN_MANAGE_CONFIG_IMPORT",
"ROLE_INTEGRATION_SYSTEM_READ",
"ROLE_INTEGRATION_WORKFLOW_READ_RESTRICTED",
"ROLE_ADMIN_MANAGE_PROCESS_ADMIN",
"ROLE_MANAGE_NOTIFICATIONS_ADMIN",
"ROLE_ADMIN_MANAGE_INTEGRATIONS_EDIT",
"ROLE_TASK_MANAGER_HOOKS_IMPORT",
"ROLE_THEMES_READ",
"ROLE_ENGINE_MANAGE_PROCESS_EDIT",
"ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_ADMIN",
"ROLE_AI_OPTIMIZER_EDIT",
"ROLE_ENGINE_MANAGE_INSTANCE_READ",
"ROLE_ADMIN_MANAGE_INTEGRATIONS_ADMIN",
"ROLE_ADMIN_MANAGE_PROCESS_IMPORT",
"ROLE_COPO_TELLER",
"ROLE_CMS_CONTENT_READ",
"ROLE_CMS_CONTENT_IMPORT",
"ROLE_INTEGRATION_WORKFLOW_ADMIN",
"ROLE_AI_ARCHITECT_EDIT",
"ROLE_TASK_MANAGER_HOOKS_READ",
"ROLE_AI_WRITER_EDIT",
"ROLE_TASK_MANAGER_HOOKS_ADMIN",
"ROLE_MEDIA_LIBRARY_EDIT",
"ROLE_CMS_TAXONOMIES_READ",
"ROLE_AI_INSPECTOR_EDIT",
"FLOWX_FRONTOFFICE",
"ROLE_START_EXTERNAL",
"ROLE_DOCUMENT_TEMPLATES_READ",
"ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_IMPORT",
"VIEW_INSTANCES",
"ROLE_ADMIN_MANAGE_INTEGRATIONS_IMPORT",
"ROLE_ADMIN_MANAGE_USERS_READ",
"ROLE_THEMES_ADMIN",
"ROLE_ADMIN_MANAGE_PLATFORM_READ",
"ROLE_TASK_MANAGER_OOO_EDIT",
"ROLE_CMS_TAXONOMIES_EDIT",
"ROLE_THEMES_IMPORT",
"ROLE_AI_DEVELOPER_EDIT",
"ROLE_MANAGE_NOTIFICATIONS_READ",
"ROLE_INTEGRATION_SYSTEM_EDIT",
"ROLE_MANAGE_NOTIFICATIONS_SEND",
"FLOWX_ADMIN",
"ROLE_INTEGRATION_READ",
"FLOWX_BACKOFFICE",
"ROLE_DOCUMENT_TEMPLATES_EDIT",
"ROLE_MEDIA_LIBRARY_IMPORT",
"ROLE_AI_ASSISTANT_READ",
"ROLE_ADMIN_MANAGE_CONFIG_EDIT",
"ROLE_CMS_TAXONOMIES_IMPORT",
"ROLE_ADMIN_MANAGE_CONFIG_ADMIN",
"ROLE_TASK_MANAGER_TASKS_READ",
"ROLE_DOCUMENT_TEMPLATES_ADMIN",
"ROLE_INTEGRATION_WORKFLOW_READ",
"ROLE_COPO_VIEWER",
"ROLE_MEDIA_LIBRARY_ADMIN",
"ROLE_NOTIFICATION_TEMPLATES_EDIT",
"ROLE_ADMIN_MANAGE_PROCESS_READ",
"ROLE_AI_INTEGRATOR_EDIT",
"ROLE_AI_SUPERVISOR_EDIT",
"ROLE_INTEGRATION_WORKFLOW_EDIT",
"ROLE_CMS_CONTENT_ADMIN",
"ROLE_CMS_CONTENT_EDIT",
"FLOWX_SUPERVISOR",
"ROLE_ADMIN_MANAGE_CONFIG_READ",
"ROLE_TASK_MANAGER_OOO_READ",
"ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_EDIT",
"ROLE_INTEGRATION_ADMIN",
"ROLE_NOTIFICATION_TEMPLATES_READ",
"ROLE_AI_AUDITOR_EDIT",
"ROLE_ENGINE_MANAGE_INSTANCE_ADMIN",
"ROLE_INTEGRATION_SYSTEM_ADMIN",
"ROLE_ADMIN_MANAGE_PROCESS_EDIT",
"ROLE_INTEGRATION_EDIT",
"ROLE_AI_DESIGNER_EDIT",
"ROLE_TASK_MANAGER_HOOKS_EDIT",
"ROLE_AI_COMMAND_READ",
"ROLE_CMS_TAXONOMIES_ADMIN",
"ROLE_ADMIN_MANAGE_USERS_ADMIN",
"ROLE_LICENSE_MANAGE_READ",
"ROLE_AI_ANALYST_EDIT",
"ROLE_TASK_MANAGER_VIEWS_ADMIN",
"ROLE_ADMIN_MANAGE_PLATFORM_ADMIN",
"ROLE_TASK_MANAGER_PROCESS_ALLOCATION_SETTINGS_READ",
"ROLE_AI_STRATEGIST_EDIT",
"ROLE_LICENSE_MANAGE_ADMIN",
"ROLE_THEMES_EDIT",
"ROLE_NOTIFICATION_TEMPLATES_ADMIN",
"ROLE_LICENSE_MANAGE_EDIT",
"ROLE_INTEGRATION_IMPORT",
"ROLE_NOTIFICATION_TEMPLATES_IMPORT",
"ROLE_ADMIN_MANAGE_INTEGRATIONS_READ",
"ROLE_ADMIN_MANAGE_USERS_EDIT",
"ROLE_MEDIA_LIBRARY_READ"
],
"scp": "FlowxAI.ReadWrite.All",
"sub": "tMG9A1npM9hK89AV9rdUvTAKVlli3oLkyI1E8F7bV5Y",
"tid": "673cac6c-3d63-40cf-a43f-07408dd91072",
"unique_name": "john.doe@flowx.ai",
"upn": "john.doe@flowx.ai",
"uti": "v3igRE_kEUqZC4nbXII3AA",
"ver": "1.0",
"wids": [
"9b895d92-2cd3-44c7-9d02-a6ac2d5ea5c3",
"fe930be7-5e62-47db-91af-98c3a49a38b1",
"e3973bdf-4987-49ae-837a-ba8e231c7286",
"158c047a-c907-4556-b7ef-446551a6b5f7",
"e8611ab8-c189-46e8-94e1-60213ab1f814",
"b79fbf4d-3ef9-4689-8143-76b194e85509"
]
}
```
## Configure helm charts
This section provides details on configuring Helm charts for FlowX.AI applications, including where to retrieve required values and setting environment variables for different application components.
***
### Where to get the values
* **tenant\_id**: The unique identifier for your Entra ID tenant.
![Tenant ID](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/image%20%286%29.png)
* **client\_id**: The client ID for the specific FlowX.AI application.
![Client ID](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/image%20%287%29.png)
* **client\_secret**: The client secret generated during app registration (only visible at creation).
![Client Secret](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/entra/image%20%288%29.png)
***
### Helm chart values
These configurations are required for different FlowX.AI application components. Substitute `<tenant_id>`, `<client_id>`, and `<client_secret>` with your specific values.
***
#### Designer
For the Designer component, use the following settings:
```yaml
SKIP_ISSUER_CHECK: true
STRICT_DISCOVERY_DOCUMENT_VALIDATION: false
KEYCLOAK_ISSUER: https://login.microsoftonline.com/<tenant_id>/v2.0
KEYCLOAK_CLIENT_ID: <client_id>
KEYCLOAK_SCOPES: "openid profile email offline_access api://rd-e-example-flowx-api/FlowxAI.ReadWrite.All"
KEYCLOAK_REDIRECT_URI: https://flowx.example.az1.cloud.flowxai.dev
```
#### All Java applications
```yaml
SECURITY_TYPE: jwt-public-key
SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_ISSUER_URI: https://sts.windows.net/<tenant_id>/
SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI: https://login.microsoftonline.com/<tenant_id>/discovery/v2.0/keys
```
#### Java applications with a Service Principal
These settings apply to Java applications that require a service principal, such as Admin, Integration Designer, Process Engine, Scheduler Core, and Task Management Plugin.
```yaml
SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_MAINAUTHPROVIDER_TOKEN_URI: https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token
SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_ID: <client_id>
SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENT_SECRET: <client_secret>
SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_SCOPE: api://rd-p-example-flowx-api/.default
```
#### Java applications with access to Microsoft Graph API
The following configuration is required for Java applications that need access to the Microsoft Graph API, such as Admin and Task Management Plugin.
```yaml
OPENID_PROVIDER: entra
OPENID_ENTRA_TENANT_ID: <tenant_id>
OPENID_ENTRA_PRINCIPAL_ID: <principal_id>
```
# Default roles
Below you can find the list of all the default roles that you can add or import into the Identity and Access Management solution to properly manage the access to all the FlowX.AI microservices.
## Default roles
A complete list of all the default roles based on modules (access scope):
| Module | Scopes | Role default value | Microservice |
| ---------------------------------- | ---------------- | ---------------------------------------------------------- | -------------------- |
| manage-platform | read | ROLE\_ADMIN\_MANAGE\_PLATFORM\_READ | Admin |
| manage-platform | admin | ROLE\_ADMIN\_MANAGE\_PLATFORM\_ADMIN | Admin |
| manage-processes | import | ROLE\_ADMIN\_MANAGE\_PROCESS\_IMPORT | Admin |
| manage-processes | read | ROLE\_ADMIN\_MANAGE\_PROCESS\_READ | Admin |
| manage-processes | edit | ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT | Admin |
| manage-processes | admin | ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN | Admin |
| manage-integrations | admin | ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_ADMIN | Admin |
| manage-integrations | read | ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_READ | Admin |
| manage-integrations | edit | ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_EDIT | Admin |
| manage-integrations | import | ROLE\_ADMIN\_MANAGE\_INTEGRATIONS\_IMPORT | Admin |
| manage-configurations | import | ROLE\_ADMIN\_MANAGE\_CONFIG\_IMPORT | Admin |
| manage-configurations | read | ROLE\_ADMIN\_MANAGE\_CONFIG\_READ | Admin |
| manage-configurations | edit | ROLE\_ADMIN\_MANAGE\_CONFIG\_EDIT | Admin |
| manage-configurations | admin | ROLE\_ADMIN\_MANAGE\_CONFIG\_ADMIN | Admin |
| manage-users | read | ROLE\_ADMIN\_MANAGE\_USERS\_READ | Admin |
| manage-users | edit | ROLE\_ADMIN\_MANAGE\_USERS\_EDIT | Admin |
| manage-users | admin | ROLE\_ADMIN\_MANAGE\_USERS\_ADMIN | Admin |
| manage-processes | edit | ROLE\_ENGINE\_MANAGE\_PROCESS\_EDIT | Engine |
| manage-processes | admin | ROLE\_ENGINE\_MANAGE\_PROCESS\_ADMIN | Engine |
| manage-instances | read | ROLE\_ENGINE\_MANAGE\_INSTANCE\_READ | Engine |
| manage-instances | admin | ROLE\_ENGINE\_MANAGE\_INSTANCE\_ADMIN | Engine |
| manage-licenses | read | ROLE\_LICENSE\_MANAGE\_READ | License |
| manage-licenses | edit | ROLE\_LICENSE\_MANAGE\_EDIT | License |
| manage-licenses | admin | ROLE\_LICENSE\_MANAGE\_ADMIN | License |
| manage-contents | import | ROLE\_CMS\_CONTENT\_IMPORT | CMS |
| manage-contents | read | ROLE\_CMS\_CONTENT\_READ | CMS |
| manage-contents | edit | ROLE\_CMS\_CONTENT\_EDIT | CMS |
| manage-contents | admin | ROLE\_CMS\_CONTENT\_ADMIN | CMS |
| manage-media-library | import | ROLE\_MEDIA\_LIBRARY\_IMPORT | CMS |
| manage-media-library | read | ROLE\_MEDIA\_LIBRARY\_READ | CMS |
| manage-media-library | edit | ROLE\_MEDIA\_LIBRARY\_EDIT | CMS |
| manage-media-library | admin | ROLE\_MEDIA\_LIBRARY\_ADMIN | CMS |
| manage-taxonomies | import | ROLE\_CMS\_TAXONOMIES\_IMPORT | CMS |
| manage-taxonomies | read | ROLE\_CMS\_TAXONOMIES\_READ | CMS |
| manage-taxonomies | edit | ROLE\_CMS\_TAXONOMIES\_EDIT | CMS |
| manage-taxonomies | admin | ROLE\_CMS\_TAXONOMIES\_ADMIN | CMS |
| manage-themes | admin | ROLE\_THEMES\_ADMIN | CMS |
| manage-themes | edit | ROLE\_THEMES\_EDIT | CMS |
| manage-themes | read | ROLE\_THEMES\_READ | CMS |
| manage-themes | import | ROLE\_THEMES\_IMPORT | CMS |
| manage-tasks | read | ROLE\_TASK\_MANAGER\_TASKS\_READ | Task Management |
| manage-hooks | import | ROLE\_TASK\_MANAGER\_HOOKS\_IMPORT | Task Management |
| manage-hooks | read | ROLE\_TASK\_MANAGER\_HOOKS\_READ | Task Management |
| manage-hooks | edit | ROLE\_TASK\_MANAGER\_HOOKS\_EDIT | Task Management |
| manage-hooks | admin | ROLE\_TASK\_MANAGER\_HOOKS\_ADMIN | Task Management |
| manage-process-allocation-settings | import | ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_IMPORT | Task Management |
| manage-process-allocation-settings | read | ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_READ | Task Management |
| manage-process-allocation-settings | edit | ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_EDIT | Task Management |
| manage-process-allocation-settings | admin | ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_ADMIN | Task Management |
| manage-out-of-office-users | import | ROLE\_TASK\_MANAGER\_OOO\_IMPORT | Task Management |
| manage-out-of-office-users | read | ROLE\_TASK\_MANAGER\_OOO\_READ | Task Management |
| manage-out-of-office-users | edit | ROLE\_TASK\_MANAGER\_OOO\_EDIT | Task Management |
| manage-out-of-office-users | admin | ROLE\_TASK\_MANAGER\_OOO\_ADMIN | Task Management |
| manage-notification-templates | import | ROLE\_NOTIFICATION\_TEMPLATES\_IMPORT | Notifications |
| manage-notification-templates | read | ROLE\_NOTIFICATION\_TEMPLATES\_READ | Notifications |
| manage-notification-templates | edit | ROLE\_NOTIFICATION\_TEMPLATES\_EDIT | Notifications |
| manage-notification-templates | admin | ROLE\_NOTIFICATION\_TEMPLATES\_ADMIN | Notifications |
| manage-notifications | import | ROLE\_MANAGE\_NOTIFICATIONS\_IMPORT | Notifications |
| manage-notifications | read | ROLE\_MANAGE\_NOTIFICATIONS\_READ | Notifications |
| manage-notifications | edit | ROLE\_MANAGE\_NOTIFICATIONS\_EDIT | Notifications |
| manage-notifications | admin | ROLE\_MANAGE\_NOTIFICATIONS\_ADMIN | Notifications |
| manage-document-templates | import | ROLE\_DOCUMENT\_TEMPLATES\_IMPORT | Documents |
| manage-document-templates | read | ROLE\_DOCUMENT\_TEMPLATES\_READ | Documents |
| manage-document-templates | edit | ROLE\_DOCUMENT\_TEMPLATES\_EDIT | Documents |
| manage-document-templates | admin | ROLE\_DOCUMENT\_TEMPLATES\_ADMIN | Documents |
| manage-systems | admin | ROLE\_INTEGRATION\_SYSTEM\_ADMIN | Integration Designer |
| manage-systems | import | ROLE\_INTEGRATION\_SYSTEM\_IMPORT | Integration Designer |
| manage-systems | read | ROLE\_INTEGRATION\_SYSTEM\_READ | Integration Designer |
| manage-systems | edit | ROLE\_INTEGRATION\_SYSTEM\_EDIT | Integration Designer |
| manage-workflows | import | ROLE\_INTEGRATION\_WORKFLOW\_IMPORT | Integration Designer |
| manage-workflows | read\_restricted | ROLE\_INTEGRATION\_WORKFLOW\_READ\_RESTRICTED | Integration Designer |
| manage-workflows | read | ROLE\_INTEGRATION\_WORKFLOW\_READ | Integration Designer |
| manage-workflows | edit | ROLE\_INTEGRATION\_WORKFLOW\_EDIT | Integration Designer |
| manage-workflows | admin | ROLE\_INTEGRATION\_WORKFLOW\_ADMIN | Integration Designer |
## Importing roles
You can import a super admin group and its default roles in Keycloak using the following script file.
You need to edit the following script parameters:
* `baseAuthUrl`
* `username`
* `password`
* `realm`
* `the name of the group for super admins`
The `requests` package is required to run the script. It can be installed with the following command:
```bash
pip3 install requests
```
The script can be run with the following command:
```bash
python3 importUsers.py
```
# FlowX Admin setup
The FlowX.AI Admin microservice manages process-related entities and provides the REST API used by the FlowX.AI Designer. The processes defined here will be handled by the FlowX.AI Engine. The Admin microservice uses most of the same resources as the FlowX.AI Engine.
## Infrastructure Prerequisites
Before setting up the Admin microservice, ensure the following components are properly set up:
* **Database Instance**: The Admin microservice connects to the same database as the FlowX.AI Engine.
## Dependencies
Ensure the following dependencies are met:
* **[Database](#database-configuration)**: Properly configured database instance.
* **[Datasource](#datasource-configuration)**: Configuration details for connecting to the same database used by the FlowX.AI Engine.
* **Kafka cluster**: If you intend to use the [**FlowX.AI Audit**](../../4.0/docs/platform-deep-dive/core-extensions/audit) functionality, ensure that the backend microservice can connect to the Kafka cluster. When connected to Kafka, it sends details about all database transactions to a configured Kafka topic.
## Datasource configuration
To store process definitions the Admin microservice connects to the same Postgres / Oracle database as the Engine. Make sure to set the needed database connection details.
The following configuration details need to be added using environment variables:
* `SPRING_DATASOURCE_URL`: This environment variable is used to specify the URL of the database that the Admin microservice and Engine connect to. The URL typically includes the necessary information to connect to the database server, such as the host, port, and database name. It follows the format of the database's JDBC URL, which is specific to the type of database being used (e.g., PostgreSQL or Oracle).
* `SPRING_DATASOURCE_USERNAME`: This environment variable sets the username that the Admin microservice and Engine use to authenticate when connecting to the database. The username identifies the user account that has access to the specified database.
* `SPRING_DATASOURCE_PASSWORD`: This environment variable specifies the password associated with the username provided in the `SPRING_DATASOURCE_USERNAME` variable. The password is used to authenticate the user and grant access to the database.
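For example, for a PostgreSQL instance the variables might look like this (host, port, database name, and credentials are placeholders):
```yaml
SPRING_DATASOURCE_URL: jdbc:postgresql://postgresql-host:5432/process-engine-db
SPRING_DATASOURCE_USERNAME: flowx
SPRING_DATASOURCE_PASSWORD: <db-password>
```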
Make sure the user, password, connection URL, and database name are configured correctly; otherwise, you will receive errors at startup.
The database schema is managed by a [liquibase](https://www.liquibase.org/) script provided with the Engine.
### MongoDB configuration
The Admin microservice also connects to a MongoDB database instance for additional data management. Configure the MongoDB connection with the following environment variables:
* `SPRING_DATA_MONGODB_URI` - URI for connecting to the Admin MongoDB instance
* Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<host3>:<port>/<database>?retryWrites=false`
* `DB_USERNAME`: `data-model`.
* `SPRING_DATA_MONGODB_STORAGE` - Specifies the storage type used for the Runtime MongoDB instance (Azure environments only)
* **Possible Values:** `mongodb`, `cosmosdb`
* **Default Value:** `mongodb`
Ensure that the MongoDB configuration is compatible with the same database requirements as the FlowX.AI Engine, especially if sharing database instances.
## Kafka configuration
**Kafka** is used for saving audit logs and for using scheduled timer events. Only a producer needs to be configured. The environment variables that need to be set are:
* `KAFKA_BOOTSTRAP_SERVERS` - the Kafka bootstrap servers URL
* `KAFKA_TOPIC_AUDIT_OUT` - topic key for sending audit logs
* `KAFKA_TOPIC_PROCESS_SCHEDULED_TIMER_EVENTS_OUT_SET`
* `KAFKA_TOPIC_PROCESS_SCHEDULED_TIMER_EVENTS_OUT_STOP`
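A minimal sketch of these variables (the broker address is a placeholder; the topic names follow the platform's `ai.flowx.` naming convention and should match your environment):
```yaml
KAFKA_BOOTSTRAP_SERVERS: kafka-broker:9092
KAFKA_TOPIC_AUDIT_OUT: ai.flowx.core.trigger.save.audit.v1
KAFKA_TOPIC_PROCESS_SCHEDULED_TIMER_EVENTS_OUT_SET: ai.flowx.core.trigger.set.timer-event-schedule.v1
KAFKA_TOPIC_PROCESS_SCHEDULED_TIMER_EVENTS_OUT_STOP: ai.flowx.core.trigger.stop.timer-event-schedule.v1
```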
## Redis configuration
The following values should be set with the corresponding Redis-related values:
* `SPRING_REDIS_HOST`
* `SPRING_REDIS_PASSWORD`
## Logging
The following environment variables could be set in order to control log levels:
* `LOGGING_LEVEL_ROOT` - root spring boot microservice logs
* `LOGGING_LEVEL_APP` - app level logs
## Authorization & access roles
The following variables need to be set in order to connect to the identity management platform:
* `SECURITY_OAUTH2_BASE_SERVER_URL`
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID`
* `SECURITY_OAUTH2_REALM`
A specific service account should be configured in the OpenID provider to allow the Admin microservice to access realm-specific data. It can be configured using the following environment variables:
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID` - the openid service account username
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET` - the openid service account client secret
Configuration needed to clear the offline sessions of a user session from the identity provider solution:
* `FLOWX_AUTHENTICATE_CLIENTID`
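A minimal sketch of these variables for a local Keycloak setup (URLs, client names, and the secret are placeholders):
```yaml
SECURITY_OAUTH2_BASE_SERVER_URL: http://localhost:8080
SECURITY_OAUTH2_CLIENT_CLIENT_ID: flowx-platform-authorize
SECURITY_OAUTH2_REALM: flowx
SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID: flowx-platform-admin-sa
SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET: <client-secret>
FLOWX_AUTHENTICATE_CLIENTID: flowx-platform-authenticate
```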
## Elasticsearch
* `SPRING_ELASTICSEARCH_REST_URIS`
* `SPRING_ELASTICSEARCH_REST_DISABLESSL`
* `SPRING_ELASTICSEARCH_INDEX_SETTINGS_NAME`
* `SPRING_ELASTICSEARCH_REST_USERNAME`
* `SPRING_ELASTICSEARCH_REST_PASSWORD`
## Undo/redo actions
```yaml
flowx:
  undo-redo:
    ttl: 6000000 # Redis TTL for undoable actions by user+nodeid (in seconds)
    cleanup:
      cronExpression: "0 0 2 * * *" # Every day at 2am
      days: 2 # Items marked as deleted will be permanently removed if older than this number of days
```
# FlowX Advancing Controller setup
This guide provides step-by-step instructions to help you configure and deploy the Advancing Controller effectively.
## Infrastructure prerequisites
Before setting up the Advancing Controller, ensure the following components are properly set up:
* **FlowX.AI Engine Deployment**: The Advancing Controller depends on the FlowX Engine and must be deployed in the same environment. Refer to the FlowX Engine [**setup guide**](./flowx-engine-setup-guide/engine-setup) for more information on setting up the Engine.
* **Database Instance**: The Advancing Controller uses a PostgreSQL or OracleDB instance as its database.
## Dependencies
Ensure the following dependencies are met:
* [Database](#database-configuration): Properly configured database instance.
* [Datasource](#configuring-datasource): Configuration details for connecting to the database.
* [FlowX.AI Engine](./flowx-engine-setup-guide/engine-setup): Must be set up and running. Refer to the FlowX Engine setup guide.
### Database compatibility
The Advancing Controller supports both PostgreSQL and OracleDB databases. However, the FlowX.AI Engine and the Advancing Controller must be configured to use the same type of database at any given time. The FlowX.AI Engine employs two databases: one shared with the FlowX.AI Admin microservice for process metadata and instances, and the other dedicated to advancement.
Mixing PostgreSQL and OracleDB is not supported; both databases must be of the same type.
## Database configuration
### PostgreSQL
A basic PostgreSQL configuration for Advancing:
```yaml
postgresql:
  enabled: true
  postgresqlUsername: "postgres"
  postgresqlPassword: ""
  postgresqlDatabase: "advancing"
  existingSecret: "postgresql-generic"
  postgresqlMaxConnections: 200
  persistence:
    enabled: true
    storageClass: standard-rwo
    size: 20Gi
  resources:
    limits:
      cpu: 1000m
      memory: 1024Mi
    requests:
      memory: 256Mi
      cpu: 100m
  metrics:
    enabled: true
    serviceMonitor:
      enabled: false
    prometheusRule:
      enabled: false
  primary:
    nodeSelector:
      preemptible: "false"
```
If the parallel advancing configuration already exists, you must reset the 'advancing' database by executing the SQL command `DROP DATABASE advancing;`. Once the database has been dropped, the Liquibase script will automatically re-create it.
## Configuration
The Advancing Controller uses a PostgreSQL or an Oracle database as a dependency.
* Ensure that the user, password, connection link, and database name are correctly configured. If these details are not configured correctly, errors will occur at startup.
* The datasource is configured automatically via a Liquibase script inside the engine. All updates will include migration scripts.
### Configuring datasource
If you need to change the datasource configuration detail, you can use the following environment variables:
* `SPRING_DATASOURCE_URL`: Environment variable used to configure a data source URL for a Spring application. It typically contains the JDBC driver name, the server name, port number, and database name.
* `SPRING_DATASOURCE_USERNAME`: Environment variable used to set the username for the database connection. This can be used to connect to a database instance.
* `SPRING_DATASOURCE_PASSWORD`: Environment variable used to store the password for the database connection. This can be used to secure access to the database and ensure that only authorized users have access to the data.
* `SPRING_JPA_DATABASE`: Specifies the type of database that the Spring application should connect to (accepted values: `oracle` or `postgresql`).
* `SPRING_JPA_PROPERTIES_HIBERNATE_DEFAULTSCHEMA` (❗️only for Oracle DBs): Specifies the default schema to use for the database (default value: `public`).
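For example, a PostgreSQL setup matching the Helm values above might use (host and credentials are placeholders; the database name comes from the `postgresqlDatabase` value):
```yaml
SPRING_DATASOURCE_URL: jdbc:postgresql://postgresql-host:5432/advancing
SPRING_DATASOURCE_USERNAME: postgres
SPRING_DATASOURCE_PASSWORD: <db-password>
SPRING_JPA_DATABASE: postgresql
```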
It's important to keep in mind that the Advancing Controller is tightly integrated with the FlowX.AI Engine. Therefore, it is crucial to ensure that both the Engine and the Advancing Controller are configured correctly and are in sync.
# Application manager setup
The Application Manager is a backend microservice for managing FlowX applications, libraries, versions, manifests, configurations, and builds. This guide provides detailed instructions for setting up the service and configuring its components to manage application-related operations effectively.
The Application Manager is a microservice that manages applications and also acts as a proxy for front-end requests related to resources.
## Infrastructure prerequisites
Before you start setting up the Application Manager service, ensure the following infrastructure components are in place:
* **PostgreSQL** - version 13 or higher for storing application data (could vary based on your preferred relational database)
* **MongoDB** - version 4.4 or higher for managing runtime builds
* **Redis** - version 6.0 or higher (if caching is required)
* **Kafka** - version 2.8 or higher for messaging and event-driven communication between services
Ensure that the database for storing application data is properly set up and configured before starting the service.
## Dependencies
The Application Manager relies on several other FlowX services and components:
* [**Database instance**](#database-configuration) - To store application details, manifests, and configurations
* [**Authorization & Access Management**](#configuring-authorization-and-access-roles)
* [**Kafka Event Bus**](#configuring-kafka)
* **Proxy Mechanism** - Ensure resource endpoints are accessible through the application-proxy.
## Configuration
### Environment variables
* `APP_MANAGER_DB_URL` - Connection string for the relational database
* `APP_MANAGER_DB_USER` - Username for the database
* `APP_MANAGER_DB_PASSWORD` - Password for the database
* `APP_MANAGER_DB_NAME` - Database name
#### Configuring authorization and access roles
To integrate the Application Manager with the identity management system for authorization, set the following environment variables:
* `SECURITY_OAUTH2_BASE_SERVER_URL` - Base URL for the OAuth 2.0 Authorization Server
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID` - Unique identifier for the client application registered with the OAuth 2.0 server
* `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` - Secret key for authenticating requests made by the authorization client
* `SECURITY_OAUTH2_REALM` - The realm name for OAuth2 authentication
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID` - Client ID for the application manager service account
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET` - Client Secret for the application manager service account
Refer to the dedicated section for configuring user roles and access rights:
### Database configuration
Configure the data sources for PostgreSQL and MongoDB as follows:
#### PostgreSQL (application data)
* `SPRING_DATASOURCE_URL` - Database URL for PostgreSQL
* `SPRING_DATASOURCE_USERNAME` - Username for PostgreSQL
* `SPRING_DATASOURCE_PASSWORD` - Password for PostgreSQL
* `SPRING_DATASOURCE_DRIVER_CLASS_NAME` - Driver class for PostgreSQL
Note: The same container image and Helm chart are used for both [**Application Manager**](./runtime-manager) and Runtime Manager. Be sure to review the [**deployment guidelines**](../../release-notes/v4.5.1-november-2024/deployment-guidelines-v4.5.1#component-versions) in the release notes to verify compatibility and check the correct version.
#### Configuring MongoDB (runtime database - additional data)
The Application Manager requires MongoDB to store runtime build information. Use the following environment variables for configuration:
* `SPRING_DATA_MONGODB_URI` - URI for connecting to the MongoDB instance -> to connect to `app-runtime` database
* Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<host3>:<port>/<database>?retryWrites=false`
* `DB_USERNAME` : `app-runtime`
* `SPRING_DATA_MONGODB_STORAGE` - Specifies the storage type used for the Runtime MongoDB instance (Azure environments only)
* **Possible Values:** `mongodb`, `cosmosdb`
* **Default Value:** `mongodb`
### Configuring Redis
If caching is required, configure Redis using the following environment variables:
* `SPRING_REDIS_HOST` - Hostname or IP address of the Redis server
* `SPRING_REDIS_PASSWORD` - Password for authenticating with the Redis server
### Configuring Kafka
The Application Manager uses Kafka for event-driven operations. Set up the Kafka configuration using the following environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS` - Address of the Kafka server, formatted as `host:port`
* `SPRING_KAFKA_CONSUMER_GROUP_ID` - Consumer group ID for Kafka topics
* `KAFKA_CONSUMER_THREADS` - Number of Kafka consumer threads
The Application Manager uses a structured naming convention for Kafka topics, built from predefined components. These components allow optional adjustments when the desired topic name cannot be obtained through `$package . $environment . $separator . $version`.
Each topic adheres to a consistent naming schema for streamlined communication across environments and versions.
#### Topic naming components
| Component | Description | Default Value |
| ------------- | -------------------------------------------------- | ---------------------------------------------------------------- |
| `package` | Package identifier for namespace | `ai.flowx.` |
| `environment` | Environment identifier | `dev.` |
| `version` | Version identifier for topic compatibility | `.v1` |
| `separator` | Primary separator for components | `.` |
| `separator2` | Secondary separator for additional distinction | `-` |
| `prefix` | Combines package and environment as a topic prefix | `${kafka.topic.naming.package}${kafka.topic.naming.environment}` |
| `suffix` | Appends version to the end of the topic name | `${kafka.topic.naming.version}` |
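Assuming standard Spring-style property binding for the `kafka.topic.naming.*` keys referenced in the patterns below, the defaults could be overridden roughly like this:
```yaml
kafka:
  topic:
    naming:
      package: "ai.flowx."
      environment: "dev."
      version: ".v1"
      separator: "."
      separator2: "-"
```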
#### Application resource topics
* **Resource Export**
* **Pattern:** `${kafka.topic.naming.prefix}application${separator2}version${separator}export${kafka.topic.naming.suffix}`
* **Purpose:** For exporting application resources.
* **Example:** `ai.flowx.application-version.export.v1`
* **Resource Import**
* **Pattern:** `${kafka.topic.naming.prefix}application${separator2}version${separator}import${kafka.topic.naming.suffix}`
* **Purpose:** For importing application resources.
* **Example:** `ai.flowx.application-version.import.v1`
#### Build resource topics
* **Build Export**
* **Pattern:** `${kafka.topic.naming.prefix}build${separator}export${kafka.topic.naming.suffix}`
* **Purpose:** For exporting build resources.
* **Example:** `ai.flowx.build.export.v1`
* **Build Import**
* **Pattern:** `${kafka.topic.naming.prefix}build${separator}import${kafka.topic.naming.suffix}`
* **Purpose:** For importing build resources.
* **Example:** `ai.flowx.build.import.v1`
#### Process topics
* **Start for Event**
* **Pattern:** `${kafka.topic.naming.prefix}core${separator}trigger${separator}start${separator2}for${separator2}event${separator}process${kafka.topic.naming.suffix}`
* **Purpose:** For triggering process start events.
* **Example:** `ai.flowx.core.trigger.start-for-event.process.v1`
* **Scheduled Timer Events**
* **Set Timer Schedule**
* **Pattern:** `${kafka.topic.naming.prefix}core${separator}trigger${separator}set${separator}timer${separator2}event${separator2}schedule${kafka.topic.naming.suffix}`
* **Purpose:** For setting scheduled timer events.
* **Example:** `ai.flowx.core.trigger.set.timer-event-schedule.v1`
* **Stop Timer Schedule**
* **Pattern:** `${kafka.topic.naming.prefix}core${separator}trigger${separator}stop${separator}timer${separator2}event${separator2}schedule${kafka.topic.naming.suffix}`
* **Purpose:** For stopping scheduled timer events.
* **Example:** `ai.flowx.core.trigger.stop.timer-event-schedule.v1`
#### Audit topics
* **Audit Output**
* **Pattern:** `${kafka.topic.naming.prefix}core${separator}trigger${separator}save${separator}audit${kafka.topic.naming.suffix}`
* **Purpose:** For sending audit-related events.
* **Example:** `ai.flowx.core.trigger.save.audit.v1`
These Kafka topics use predefined naming conventions for ease of use. Optional adjustments may be made if the desired topic name cannot be achieved with the `$package . $environment . $separator . $version` structure.
### Configuring resource proxy
The Resource Proxy module forwards resource-related requests to appropriate services, handling CRUD operations on the manifest. It requires proper configuration of proxy endpoints:
* **RESOURCE\_PROXY\_MANIFEST\_URL** - URL for managing the application manifest
* **RESOURCE\_PROXY\_TARGET\_URL** - URL for forwarding resource-related requests to their respective services
### Configuring logging
To control the logging levels for the Application Manager, use the following environment variables:
* **LOGGING\_LEVEL\_ROOT** - Log level for the root service logs
* **LOGGING\_LEVEL\_APP** - Log level for application-level logs
* **LOGGING\_LEVEL\_DB** - Log level for database interactions
### Configuring file storage
If the Application Manager requires file storage for resources or builds, configure S3-compatible storage using the following environment variables:
* **APPLICATION\_FILE\_STORAGE\_S3\_URL** - URL of the S3-compatible storage server
* **APPLICATION\_FILE\_STORAGE\_S3\_BUCKET\_NAME** - S3 bucket name for storing application files
* **APPLICATION\_FILE\_STORAGE\_S3\_ACCESS\_KEY** - Access key for S3 storage
* **APPLICATION\_FILE\_STORAGE\_S3\_SECRET\_KEY** - Secret key for S3 storage
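A minimal sketch of these values, assuming a MinIO-style endpoint; the URL, bucket name, and credentials are placeholders:
```yaml
env:
  APPLICATION_FILE_STORAGE_S3_URL: "http://minio:9000"          # placeholder endpoint
  APPLICATION_FILE_STORAGE_S3_BUCKET_NAME: "application-files"  # placeholder bucket
  APPLICATION_FILE_STORAGE_S3_ACCESS_KEY: "<access-key>"
  APPLICATION_FILE_STORAGE_S3_SECRET_KEY: "<secret-key>"
```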
### Data model overview
The Application Manager stores application data using a relational database schema, with key entities such as application, application\_version, and application\_manifest. Below are descriptions of primary entities:
* **Application** - Defines an application with its details like name, type, and metadata.
* **Application Branch** - Represents branches for versioning within an application.
* **Application Version** - Keeps track of each version of an application, including committed and WIP statuses.
* **Application Manifest** - Contains the list of resources associated with a specific application version.
### Monitoring and maintenance
To monitor the performance and health of the Application Manager, use tools like Prometheus or Grafana. Configure Prometheus metrics with the following environment variable:
* **MANAGEMENT\_PROMETHEUS\_METRICS\_EXPORT\_ENABLED** - Enables or disables Prometheus metrics export (default: false).
### Ingress configuration
Configure ingress to control external access to Application Manager:
```yaml
ingress:
  enabled: true
  public:
    enabled: false
  admin:
    enabled: true
    hostname: "{{ .Values.flowx.ingress.admin }}"
    path: /appmanager(/|$)(.*)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,flowx-platform
```
> Note: Replace placeholders with actual values for your environment before starting the service.
# Audit setup
This guide will walk you through the process of setting up the Audit service and configuring it to meet your needs.
## Infrastructure prerequisites
The Audit service requires the following components to be set up before it can be started:
* **Docker engine** - version 17.06 or higher
* **Kafka** - version 2.8 or higher
* **Elasticsearch** - version 7.11.0 or higher
## Dependencies
The Audit service is built as a Docker image and runs on top of Kafka and Elasticsearch. Therefore, these services must be set up and running before starting the Audit service.
* [**Kafka configuration**](./setup-guides-overview#kafka)
* [**Authorization & access roles**](./setup-guides-overview#authorization--access-roles)
* [**Elasticsearch**](#configuring-elasticsearch)
* [**Logging**](./setup-guides-overview#logging)
## Configuration
### Configuring Kafka
To configure the Kafka server for the Audit service, set the following environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS` - address of the Kafka server, it should be in the format "host:port"
* `SPRING_KAFKA_CONSUMER_GROUP_ID` - the consumer group ID to be used for the audit logs
* `KAFKA_CONSUMER_THREADS` - the number of Kafka consumer threads to be used for processing audit logs
* `KAFKA_TOPIC_AUDIT_IN` - the topic key for receiving audit logs
### Configuring Elasticsearch
To configure Elasticsearch, set the following environment variables:
* `SPRING_ELASTICSEARCH_REST_URIS` - the URL(s) of one or more Elasticsearch nodes to connect to (no protocol needed)
* `SPRING_ELASTICSEARCH_REST_DISABLESSL` - a boolean value that determines whether SSL should be disabled for Elasticsearch connections
* `SPRING_ELASTICSEARCH_REST_USERNAME` - the username to use for basic authentication when connecting to Elasticsearch
* `SPRING_ELASTICSEARCH_REST_PASSWORD` - the password to use for basic authentication when connecting to Elasticsearch
* `SPRING_ELASTICSEARCH_INDEX_SETTINGS_DATASTREAM` - the index settings for the datastreams created in Elasticsearch (relevant when the same Elasticsearch cluster is shared across dev environments)
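A hedged example of how these variables might be wired together; the node address and credentials are placeholders:
```yaml
env:
  SPRING_ELASTICSEARCH_REST_URIS: "elasticsearch-master:9200"  # placeholder node address, no protocol
  SPRING_ELASTICSEARCH_REST_DISABLESSL: "false"
  SPRING_ELASTICSEARCH_REST_USERNAME: "elastic"                # placeholder credentials
  SPRING_ELASTICSEARCH_REST_PASSWORD: "<password>"
```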
### Configuring logging
To control the log levels, set the following environment variables:
* `LOGGING_LEVEL_ROOT` - the log level for the root spring boot microservice logs
* `LOGGING_LEVEL_APP` - the log level for app-level logs
Make sure to overwrite the placeholders (where needed) with the appropriate values before starting the service.
# FlowX CMS setup
The CMS service is a microservice that allows managing taxonomies and contents. It is available as a Docker image and is designed to make it easy to edit and analyze content. This guide will walk you through the process of setting up the service and configuring it to meet your needs.
## Infrastructure prerequisites
The CMS service requires the following components to be set up before it can be started:
* **Docker engine** - version 17.06 or higher
* **MongoDB** - version 4.4 or higher for storing taxonomies and contents
* **Redis** - version 6.0 or higher
* **Kafka** - version 2.8 or higher
* **Elasticsearch** - version 7.11.0 or higher
The service comes with most of the needed configuration properties filled in, but there are a few that need to be set up using some custom environment variables.
## Dependencies
* [**DB instance**](#mongodb-database)
* [**Authorization & access roles**](#configuring-authorization-access-roles)
* [**Redis**](#configuring-redis)
* [**Kafka**](#configuring-kafka)
## Configuration
Configure the default application name to be used when retrieving content:
```yaml
application:
defaultApplication: ${DEFAULT_APPLICATION:flowx}
```
If this configuration is not provided, the default value will be set to `flowx`.
### Configuring authorization & access roles
To connect to the identity management platform, the following variables need to be set:
* `SECURITY_OAUTH2_BASE_SERVER_URL` - the base URL of the OAuth 2.0 Authorization Server, which handles authentication and authorization for clients and users; it is used to authorize clients and to issue and validate access tokens
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID` - a unique identifier for a client application registered with the OAuth 2.0 Authorization Server; it is used to authenticate the client application when it attempts to access resources on behalf of a user
* `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` - the secret key used to authenticate requests made by an authorization client
* `SECURITY_OAUTH2_REALM` - the realm name used when authenticating with the OAuth2 provider
To configure user roles and access rights tailored to your specific use case in FlowX Designer, please refer to the following section:
### Configuring MongoDB
The MongoDB database is used for storing taxonomies and contents in the CMS service. Configure MongoDB with the following environment variables:
* `SPRING_DATA_MONGODB_URI` - URI for connecting to the CMS MongoDB instance
* Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@,,:/?retryWrites=false`
* `DB_USERNAME`: `cms-core`
* `MONGOCK_TRANSACTIONENABLED` - Enables or disables transactions in MongoDB for compatibility with the Mongock library
* **Default Value:** `false` (Set it to `false` to support successful migrations)
* **Note:** Set to `false` due to known issues with transactions in MongoDB version 5.
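As an illustrative sketch only, a fully expanded URI could look like the following; the hosts, port, and database name are hypothetical placeholders:
```yaml
env:
  # hosts, port, and database name below are hypothetical placeholders
  SPRING_DATA_MONGODB_URI: "mongodb://cms-core:${DB_PASSWORD}@mongodb-0:27017,mongodb-1:27017,mongodb-2:27017/cms?retryWrites=false"
  MONGOCK_TRANSACTIONENABLED: "false"
```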
#### Configuring MongoDB (runtime database - additional data)
The CMS service also requires connection to a Runtime MongoDB instance for managing additional data related to runtime operations. Use the following environment variables for configuration:
* `SPRING_DATA_MONGODB_RUNTIME_ENABLED` - Enables Runtime MongoDB usage
* **Default Value:** `true`
* `RUNTIME_DB_USERNAME`: `app-runtime`
* `SPRING_DATA_MONGODB_RUNTIME_URI` - URI for connecting to the Runtime MongoDB instance (`app-runtime`)
* Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@,,:/?retryWrites=false`
* `SPRING_DATA_MONGODB_STORAGE` - Specifies the storage type used for the Runtime MongoDB instance (Azure environments only)
* **Possible Values:** `mongodb`, `cosmosdb`
* **Default Value:** `mongodb`
### Configuring Redis
The service can use the [**Redis component**](./setup-guides-overview#redis-configuration) already deployed for the engine.
The following values should be set with the corresponding Redis-related values:
* `SPRING_REDIS_HOST` - the hostname or IP address of the Redis server
* `SPRING_REDIS_PASSWORD` - the password used to authenticate with the Redis server; it secures access to Redis and should be kept confidential
* `REDIS_TTL` - the maximum time-to-live (TTL) for a key in Redis; it limits how long a key can exist before Redis automatically expires and deletes it
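A brief sketch with placeholder values; the TTL shown is an arbitrary assumption to adjust per use case:
```yaml
env:
  SPRING_REDIS_HOST: "redis-master"   # placeholder hostname
  SPRING_REDIS_PASSWORD: "<password>"
  REDIS_TTL: "5000000"                # assumed TTL; tune to how long keys should live
```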
All data produced by the service is stored in Redis under a specific key; the key name can be configured through an environment variable.
### Configuring Kafka
To configure the Kafka server, you need to set the following environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS` - address of the Kafka server, it should be in the format "host:port"
* `SPRING_KAFKA_CONSUMER_GROUP_ID` - the consumer group ID used by the service's Kafka consumers
* `KAFKA_CONSUMER_THREADS` - the number of Kafka consumer threads
* `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` - the interval between retries after `AuthorizationException` is thrown by `KafkaConsumer`
* `KAFKA_TOPIC_AUDIT_OUT` - the topic key for sending audit logs
#### Request content
| Environment Variable | Default FLOWX.AI value (Customizable) |
| ----------------------------------- | ------------------------------------------------------------------ |
| KAFKA\_TOPIC\_REQUEST\_CONTENT\_IN | ai.flowx.dev.plugin.cms.trigger.retrieve.content.v1 |
| KAFKA\_TOPIC\_REQUEST\_CONTENT\_OUT | ai.flowx.dev.engine.receive.plugin.cms.retrieve.content.results.v1 |
* `KAFKA_TOPIC_REQUEST_CONTENT_IN`: This variable defines the topic used by the CMS to listen for incoming content retrieval requests.
* `KAFKA_TOPIC_REQUEST_CONTENT_OUT`: This variable defines the topic where the CMS sends the results of content retrieval requests back to the FlowX Engine.
Each action available in the service corresponds to a Kafka event. A separate Kafka topic must be configured for each use case.
It is important to note that all the actions that start with a configured pattern will be consumed by the engine.
### Configuring logging
To control the log levels, the following environment variables can be set:
* `LOGGING_LEVEL_ROOT` - the log level for the root spring boot microservice logs
* `LOGGING_LEVEL_APP` - the log level for app-level logs
* `LOGGING_LEVEL_MONGO_DRIVER` - the log level for the MongoDB driver
### Configuring file storage
* `APPLICATION_FILE_STORAGE_S3_SERVER_URL` - environment variable used to store the URL of the S3 server that is used to store files for the application.
* `APPLICATION_FILE_STORAGE_S3_BUCKET_NAME` - environment variable used to store the name of the S3 bucket that is used to store files for the application
* `APPLICATION_FILE_STORAGE_S3_ROOT_DIRECTORY` - environment variable used to store the root directory within the S3 bucket where the files for the application are stored
* `APPLICATION_FILE_STORAGE_S3_CREATE_BUCKET` - environment variable used to indicate whether the S3 bucket should be created if it does not already exist, it can be set to true or false
* `APPLICATION_FILE_STORAGE_S3_PUBLIC_URL` - the public URL of the S3 solution, it specifies the URL that can be used to access the files stored
### Configuring the maximum file size for uploads
To set the maximum file size for uploads through the CMS service (e.g., the Media Library), you can adjust the following environment variables:
* `SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE: ${MULTIPART_MAX_FILE_SIZE:50MB}`: Defines the maximum file size allowed for uploads. Default is 50 MB.
* `SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE: ${MULTIPART_MAX_REQUEST_SIZE:50MB}`: Defines the maximum request size allowed for uploads. Default is 50 MB.
Please note that raising the file size to a high limit may increase vulnerability to potential attacks. Consider carefully before making this change.
### Configuring application management
The FlowX helm chart provides a management service with the necessary parameters to integrate with the Prometheus operator. However, this integration is disabled by default.
#### Prometheus metrics export configuration
Old configuration from \< v4.1 releases (will be deprecated in v4.5):
* `MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED`: Enables or disables Prometheus metrics export.
The new configuration, available starting from the v4.1 release, is shown below. This setup is backward compatible and does not affect the configuration from v3.4.x; the old configuration files will keep working until the v4.5 release.
To configure Prometheus metrics export for the CMS service, the following environment variable is required:
| Environment Variable | Description | Default Value | Possible Values |
| ---------------------------------------------- | ---------------------------------------------- | ------------- | --------------- |
| `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED` | Enables or disables Prometheus metrics export. | `false` | `true`, `false` |
# Data-sync job setup guide
This guide provides essential environment variables for configuring the Data-Sync Job. You should use these environment variables to set up and run the Job in your Kubernetes environment.
## Overview
The Data-Sync Job is designed to synchronize and transfer data across multiple databases, ensuring data consistency and up-to-date information across all connected systems.
It operates by connecting to multiple databases, retrieves data, and synchronizes changes across them. The job logs its actions and can be scheduled to run regularly, keeping all databases in sync and up-to-date.
## Required environment variables
### MongoDB connections
#### CMS database
* `FLOWX_DATASOURCE_CMS_URI` - MongoDB URI for CMS database.
* `CMS_MONGO_USERNAME` - Username for MongoDB CMS database.
* `CMS_MONGO_DATABASE` - Database name for CMS.
#### Scheduler database
* `FLOWX_DATASOURCE_SCHEDULER_URI` - MongoDB URI for Scheduler database.
* `SCHEDULER_MONGO_USERNAME` - Username for MongoDB Scheduler database.
* `SCHEDULER_MONGO_DATABASE` - Database name for Scheduler.
#### Task Manager database
* `FLOWX_DATASOURCE_TASKMANAGER_URI` - MongoDB URI for Task Manager database.
* `TASKMANAGER_MONGO_USERNAME` - Username for MongoDB Task Manager database.
* `TASKMANAGER_MONGO_DATABASE` - Database name for Task Manager.
### PostgreSQL connections
#### Process Engine database
* `FLOWX_DATASOURCE_ENGINE_URL` - PostgreSQL URL for Process Engine database.
* `FLOWX_DATASOURCE_ENGINE_USERNAME` - Username for PostgreSQL Process Engine database.
#### Application Manager database
* `FLOWX_DATASOURCE_APPMANAGER_URL` - PostgreSQL URL for Application Manager database.
* `FLOWX_DATASOURCE_APPMANAGER_USERNAME` - Username for PostgreSQL Application Manager database.
## Deployment
To deploy the Data-Sync Job, apply the YAML configuration with the required environment variables:
```bash
kubectl apply -f data-sync-job.yaml
```
Monitor the Job status and logs as needed to ensure successful execution.
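For reference, a minimal sketch of what `data-sync-job.yaml` might contain; the image name, secret name, and connection values are assumptions for illustration only:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: data-sync-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: data-sync
          image: <your-registry>/data-sync-job:latest          # placeholder image
          env:
            - name: FLOWX_DATASOURCE_CMS_URI
              valueFrom:
                secretKeyRef:
                  name: data-sync-secrets                      # hypothetical secret
                  key: cms-uri
            - name: FLOWX_DATASOURCE_ENGINE_URL
              value: "jdbc:postgresql://postgres:5432/engine"  # placeholder URL
            - name: FLOWX_DATASOURCE_ENGINE_USERNAME
              value: "flowx"                                   # placeholder username
```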
# FlowX Designer setup
To set up FlowX Designer in your environment, follow this guide.
## Prerequisites
### NGINX
For optimal operation, the FlowX.AI Designer should use an [NGINX](../docs/platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-nginx) load balancer separate from the one used by the **FlowX Engine**. This routing mechanism handles API calls from the [SPA](./designer-setup-guide#for-configuring-the-spa) (single-page application) to the backend service, to the engine, and to various plugins.
Here's an example/suggestion of an NGINX setup:
#### For routing calls to plugins:
```yaml
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200,http://localhost:80,http://localhost:8080"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: flowx-admin-plugins-subpaths
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - path: /notification(/|$)(.*)
            backend:
              serviceName: notification
              servicePort: 80
          - path: /document(/|$)(.*)
            backend:
              serviceName: document
              servicePort: 80
  tls:
    - hosts:
        - {{host}}
      secretName: {{tls secret}}
```
#### For routing calls to the engine
Three different configurations are needed:
1. For viewing the current instances of processes running in the Engine:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /api/instances/$2
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200,http://localhost:80,http://localhost:8080"
  name: flowx-admin-engine-instances
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - path: /api/instances(/|$)(.*)
            backend:
              serviceName: {{engine-service-name}}
              servicePort: 80
```
2. For testing process definitions from the FLOWX Designer, route API calls and SSE communication to the Engine backend.
Setup for routing REST calls:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /api/$2
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200,http://localhost:80,http://localhost:8080"
  name: flowx-admin-engine-rest-api
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - path: /{{PROCESS_API_PATH}}/api(/|$)(.*)
            backend:
              serviceName: {{engine-service-name}}
              servicePort: 80
```
Setup for routing SSE communication:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/cors-allow-headers: ""
  name: flowx-public-subpath-events-rewrite
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - backend:
              service:
                name: events-gateway
                port:
                  name: http
            path: /api/events(/|$)(.*)
```
3. For accessing the REST API of the backend microservice:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "4m"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: GET, PUT, POST, DELETE, PATCH
    nginx.ingress.kubernetes.io/cors-allow-origin: "http://localhost:4200,http://localhost:80,http://localhost:8080"
  name: flowx-admin-api
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - path: /
            backend:
              serviceName: {{flowx-admin-service-name}}
              servicePort: 80
  tls:
    - hosts:
        - {{host}}
      secretName: {{tls secret}}
```
#### For configuring the SPA
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/affinity: cookie
  name: flowx-designer-spa
spec:
  rules:
    - host: {{host of web app}}
      http:
        paths:
          - backend:
              serviceName: {{flowx-designer-service-name}}
              servicePort: 80
  tls:
    - hosts:
        - {{host of web app}}
      secretName: {{tls secret}}
```
## Steps to deploy Frontend app
The FlowX.AI Designer is a single-page application (SPA) packaged in a Docker image with `nginx:1.19.10`. The web application allows an authenticated user to administer the FLOWX platform.
To configure the Docker image, set the following parameters:
```yaml
flowx-process-renderer:
  env:
    BASE_API_URL: {{the one configured as host in the nginx}}
    PROCESS_API_PATH: {{something like /engine}}
    KEYCLOAK_ISSUER: {{openid provider - ex: https://something/auth/realms/realmName}}
    KEYCLOAK_REDIRECT_URI: {{url of the SPA}}
    KEYCLOAK_CLIENT_ID: {{client ID}}
    STATIC_ASSETS_PATH: {{mediaLibrary.s3.publicUrl}}/{{env}}
```
# FlowX Events Gateway setup
This guide will walk you through the process of setting up the events-gateway service.
## Infrastructure prerequisites
Before proceeding with the setup, ensure that the following components have been set up:
* **Redis** - version 6.0 or higher
* **Kafka** - version 2.8 or higher
## Dependencies
* **Kafka** - used for event communication
* **Redis** - used for caching
## Configuration
### Configuring Kafka
Set the following Kafka-related configurations using environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS` - the address of the Kafka server, it should be in the format "host:port"
#### Group IDs
The configuration parameters "KAFKA\_CONSUMER\_GROUP\_ID\_\*" are used to set the consumer group name for Kafka consumers that consume messages from topics. Consumer groups in Kafka allow for parallel message processing by distributing the workload among multiple consumer instances. By configuring the consumer group ID, you can specify the logical grouping of consumers that work together to process messages from the same topic, enabling scalable and fault-tolerant message consumption in your Kafka application.
| Configuration Parameter | Default value | Description |
| ------------------------------------------------------------ | ---------------------------- | -------------------------------------------------------------------- |
| `KAFKA_CONSUMER_GROUP_ID_PROCESS_ENGINE_COMMANDS_MESSAGE` | `engine-commands-message` | Consumer group ID for processing engine commands messages |
| `KAFKA_CONSUMER_GROUP_ID_PROCESS_ENGINE_COMMANDS_DISCONNECT` | `engine-commands-disconnect` | Consumer group ID for processing engine commands disconnect messages |
| `KAFKA_CONSUMER_GROUP_ID_PROCESS_ENGINE_COMMANDS_CONNECT` | `engine-commands-connect` | Consumer group ID for processing engine commands connect messages |
| `KAFKA_CONSUMER_GROUP_ID_PROCESS_TASK_COMMANDS` | `task-commands-message` | Consumer group ID for processing task commands |
#### Threads
The configuration parameters "KAFKA\_CONSUMER\_THREADS\_\*" are utilized to specify the number of threads assigned to Kafka consumers for processing messages from topics. These parameters allow you to fine-tune the concurrency and parallelism of your Kafka consumer application, enabling efficient and scalable message consumption from Kafka topics.
| Configuration Parameter | Default value | Description |
| ----------------------------------------------------------- | ------------- | ---------------------------------------------------------------------------------------- |
| `KAFKA_CONSUMER_THREADS_PROCESS_ENGINE_COMMANDS_MESSAGE` | 10 | Number of threads for processing engine commands messages |
| `KAFKA_CONSUMER_THREADS_PROCESS_ENGINE_COMMANDS_DISCONNECT` | 5 | Number of threads for processing engine commands disconnect messages |
| `KAFKA_CONSUMER_THREADS_PROCESS_ENGINE_COMMANDS_CONNECT` | 5 | Number of threads for processing engine commands connect messages |
| `KAFKA_CONSUMER_THREADS_TASK_COMMANDS` | 10 | Number of threads for task commands |
| `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` | 10 | Interval between retries after an AuthorizationException is thrown by the Kafka consumer |
#### Kafka topics related to process instances
| Configuration Parameter | Default value |
| ----------------------------------------------------------- | ---------------------------------------------------------- |
| `KAFKA_TOPIC_EVENTS_GATEWAY_PROCESS_INSTANCE_IN_MESSAGE` | `ai.flowx.dev.eventsgateway.engine.commands.message.v1` |
| `KAFKA_TOPIC_EVENTS_GATEWAY_PROCESS_INSTANCE_IN_DISCONNECT` | `ai.flowx.dev.eventsgateway.engine.commands.disconnect.v1` |
| `KAFKA_TOPIC_EVENTS_GATEWAY_PROCESS_INSTANCE_IN_CONNECT` | `ai.flowx.dev.eventsgateway.engine.commands.connect.v1` |
#### Kafka topics related to tasks
| Configuration Parameter | Default value |
| -------------------------------------------- | ------------------------------------------------- |
| `KAFKA_TOPIC_EVENTS_GATEWAY_TASK_IN_MESSAGE` | `ai.flowx.eventsgateway.task.commands.message.v1` |
### Configuring authorization & access roles
Set the following environment variables to connect to the identity management platform:
| Configuration Parameter | Description |
| -------------------------------------- | --------------------------------------- |
| `SECURITY_OAUTH2_BASE_SERVER_URL` | Base URL of the OAuth2 server |
| `SECURITY_OAUTH2_CLIENT_CLIENT_ID` | Client ID for OAuth2 authentication |
| `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` | Client secret for OAuth2 authentication |
| `SECURITY_OAUTH2_REALM` | Realm for OAuth2 authentication |
### Redis
The process engine sends the messages to the events-gateway, which is responsible for sending them to Redis.
| Configuration Parameter | Description |
| ---------------------------- | ---------------------------- |
| `SPRING_DATA_REDIS_HOST` | Hostname of the Redis server |
| `SPRING_DATA_REDIS_PASSWORD` | Password for Redis server |
| `SPRING_DATA_REDIS_PORT`     | Port of the Redis server     |
#### Master replica
The events-gateway can be configured to communicate with Redis using the MASTER\_REPLICA replication mode by configuring the following property:
`spring.data.redis.sentinel.nodes: replica1, replica2, replica3, ...`
The corresponding environment variable is:
* `SPRING_DATA_REDIS_SENTINEL_NODES`
##### Example
```properties
spring.data.redis.sentinel.nodes=host1:26379,host2:26379,host3:26379
```
In the above example, the Spring Boot application will connect to three Redis Sentinel nodes: host1:26379, host2:26379, and host3:26379.
The property value should be a comma-separated list of host:port pairs, where each pair represents the hostname or IP address and the port number of a Redis Sentinel node.
By default, Redis runs standalone, so the `redis-replicas` configuration is optional and intended for high-load use cases.
In the context of Spring Boot and Redis Sentinel integration, the `spring.data.redis.sentinel.nodes` property specifies the list of Redis Sentinel nodes that the application should connect to. These nodes are responsible for monitoring and managing Redis instances.
### Configuring logging
The following environment variables can be set to control log levels:
| Configuration Parameter | Description |
| ----------------------- | -------------------------------------------------------- |
| `LOGGING_LEVEL_ROOT` | Logging level for the root Spring Boot microservice logs |
| `LOGGING_LEVEL_APP` | Logging level for the application-level logs |
# Configuring access roles for processes
## Access to a process definition
Setting up user role-based access on process definitions is done by configuring swimlanes on the process definition.
By default, all process nodes belong to the same swimlane. If more swimlanes are needed, they can be edited in the process definition settings panel.
Swimlane role settings apply to the whole process, the process nodes or the actions to be performed on the nodes.
First, the desired user roles need to be configured in the identity provider solution and users must be assigned the correct roles.
You can use the **Access management** tab under **General Settings** to administrate all the roles.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/access_management_roles.png)
To be able to access the roles defined in the identity provider solution, a [**service account**](../access-management/configuring-an-iam-solution#process-engine-service-account) with appropriate permissions needs to be added in the identity provider, and the details of that service account [**need to be set up in the platform configuration**](../../setup-guides/designer-setup-guide#authorization--access-roles).
The defined roles will then be available to be used in the process definition settings (**Permissions** tab) panel for configuring swimlane access.
A **Default** swimlane comes with two default permissions assigned based on a specific role.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/swimlane_default_roles.png)
* **execute** - the user will be able to start process instances and run actions on them
* **self-assign** - the user can assign a process instance to themselves and start working on it
This is valid for **> 2.11.0** FLOWX.AI platform release.
Other **Permissions** can be added manually, depending on the user's needs. Some permissions must be configured in order to use features inside the [Task Management](/4.0/docs/platform-deep-dive/plugins/custom-plugins/task-management/task-management-overview) plugin. Specific roles need to be assigned separately for a few available process operations. These are:
* **view** - the user will be able to view process instance data
* **assign** - user can assign tasks to other users (this operation is only accessible through the **Task management** plugin)
* **unassign** - user can unassign tasks from other users (this operation is only accessible through the **Task management** plugin)
* **hold** - user can mark the process instance as on hold (this operation is only accessible through the **Task management** plugin)
* **unhold** - user can mark the process instance as not on hold (this operation is only accessible through the **Task management** plugin)
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/process_permissions.png)
**\< 2.11.0 platform release** - if no role is configured on an operation, no restrictions will be applied.
## Configuration examples
Valid for \< 2.11.0 release version.
### Regular user
Below you can find an example of configuration of roles for a regular user:
![example configuration of roles for a regular user](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/regular_user_roles.png)
### Admin
Below you can find an example of configuration of roles for an admin user:
![example configuration of roles for an admin user](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/admin_user_roles.png)
Starting with [**2.11.0**](https://old-docs.flowx.ai/release-notes/v2.11.0-august-2022/) release, specific roles are needed, otherwise, restrictions will be applied.
After setting up your preferred identity provider solution, you will need to add the desired access roles in the application configuration for the FLOWX Engine (using environment variables):
[Authorization & access roles](./engine-setup#authorization-and-access-roles)
## Restricting process instance access based on business filters
[Business filters](/4.0/docs/platform-deep-dive/user-roles-management/business-filters)
Before they can be used in a process definition, the business filter attributes need to be set in the identity management platform. They have to be configured as a list of filters and made available on the authorization token. Application users will also have to be assigned this value.
## Viewing processes instances
Active process instances and their related data can be viewed from the FLOWX Designer. A user needs to be assigned to a specific role in the identity provider solution to be able to view this information.
By default, this role is named `FLOWX_ROLE`, but its name can be changed from the application configuration of the Engine by setting the following environment variable:
`FLOWX_PROCESS_DEFAULTROLES`
When viewing process instance-related data, it can be configured whether to hide specific sensitive user data. This can be configured using the `FLOWX_DATA_ANONYMIZATION` environment variable.
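As a hedged sketch, both settings could be supplied to the Engine like this; the values shown are the default role name and an assumed boolean toggle:
```yaml
env:
  FLOWX_PROCESS_DEFAULTROLES: "FLOWX_ROLE"  # default role name; replace with your own
  FLOWX_DATA_ANONYMIZATION: "true"          # assumed value for hiding sensitive user data
```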
## Starting processes
The `FLOWX_ROLE` is also used to grant permissions for starting processes.
## Access to REST API
To restrict API calls by user role, you will need to add the user roles in the application config:
```yaml
security:
  pathAuthorizations:
    - path: "/api/**"
      # "ANY_AUTHENTICATED_USER" or a user role defined in your identity provider
      rolesAllowed: "ANY_AUTHENTICATED_USER"
```
# Configuring Elasticsearch indexing
This section provides configuration steps for enabling process instance indexing using the Kafka transport strategy.
Before proceeding, it is recommended to familiarize yourself with Elasticsearch and its indexing process by referring to the Intro to Elasticsearch section.
## Configuration updates
The Kafka indexing strategy replaces the previous configuration parameter `flowx.use-elasticsearch`. To ensure backward compatibility, the parameter is still preserved in the configuration. Below is an example of how to configure it:
```yaml
spring:
  elasticsearch:
    index-settings:
      name: process_instance
      shards: 1
      replicas: 1
```
In place of the old parameter, a new configuration area has been added:
```yaml
flowx:
  indexing:
    enabled: true                   # true | false - enables or disables Elasticsearch indexing for the whole app
    processInstance:                # set of configurations for indexing process instances; can be duplicated for other objects
      indexing-type: kafka          # no-indexing | http | kafka - the chosen indexing strategy
      index-name: process_instance  # the index name that is part of the search pattern
      shards: 1
      replicas: 1
```
The `flowx.indexing.enabled` property determines whether indexing with Elasticsearch is enabled. When set to false or missing, no indexing will be performed for any entities defined below. When set to true, indexing with Elasticsearch is enabled.
If the FlowX indexing configuration is set to false, the following configuration information and guidelines are not applicable to your use case.
The `flowx.indexing.processInstance.indexing-type` property defines the indexing strategy for process instances. It can have one of the following values:
* **no-indexing**: No indexing will be performed for process instances.
* **http**: Direct connection from the process engine to Elasticsearch through HTTP calls.
* **kafka**: Data will be sent to be indexed via a Kafka topic using the new strategy. To implement this strategy, the Kafka Connect with Elasticsearch Sink Connector must be deployed in the infrastructure.
## Configuration steps
To enable indexing with Elasticsearch for the entire application, update the process-engine configuration with the following parameters:
* `FLOWX_INDEXING_ENABLED`: Set this parameter to `true` to enable indexing with Elasticsearch for the entire application.
| Variable Name | Enabled | Description |
| ------------------------ | ------- | --------------------------------------------------------- |
| FLOWX\_INDEXING\_ENABLED | true | Indexing with Elasticsearch for the whole app is enabled |
| FLOWX\_INDEXING\_ENABLED | false | Indexing with Elasticsearch for the whole app is disabled |
* `FLOWX_INDEXING_PROCESSINSTANCE_INDEXING_TYPE`: Set this parameter to `kafka` to use the Kafka transport strategy for indexing process instances.
| Variable Name | Indexing Type - Values | Definition |
| ------------------------------------------------ | ---------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEXING\_TYPE | no-indexing | No indexing is performed for process instances |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEXING\_TYPE | http                   | Process instances are indexed via HTTP (direct connection from the process engine to Elasticsearch through HTTP calls)               |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEXING\_TYPE | kafka                  | Process instances are indexed via Kafka (data is sent to be indexed through a Kafka topic - the new strategy for the applied solution) |
* `FLOWX_INDEXING_PROCESSINSTANCE_INDEX_NAME`: Specify the name of the index used for process instances.
| Variable Name | Values | Definition |
| ------------------------------------------------------- | ----------------- | ----------------------------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_INDEX\_NAME           | process\_instance | The name of the index used for storing process instances. It is also part of the search pattern |
* `FLOWX_INDEXING_PROCESSINSTANCE_SHARDS`: Set the number of shards for the index.
| Variable Name | Values | Definition |
| ---------------------------------------- | ------ | -------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_SHARDS | 1 | The number of shards for the Elasticsearch index storing process instances |
* `FLOWX_INDEXING_PROCESSINSTANCE_REPLICAS`: Set the number of replicas for the index.
| Variable Name | Values | Definition |
| ------------------------------------------ | ------ | ---------------------------------------------------------------------------- |
| FLOWX\_INDEXING\_PROCESSINSTANCE\_REPLICAS | 1 | The number of replicas for the Elasticsearch index storing process instances |
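Putting these together, a hedged example of the resulting environment block, mirroring the defaults from the tables above:
```yaml
env:
  FLOWX_INDEXING_ENABLED: "true"
  FLOWX_INDEXING_PROCESSINSTANCE_INDEXING_TYPE: "kafka"
  FLOWX_INDEXING_PROCESSINSTANCE_INDEX_NAME: "process_instance"
  FLOWX_INDEXING_PROCESSINSTANCE_SHARDS: "1"
  FLOWX_INDEXING_PROCESSINSTANCE_REPLICAS: "1"
```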
For Kafka indexing, the Kafka Connect with Elasticsearch Sink Connector must be deployed in the infrastructure.
[Elasticsearch Service Sink Connector](https://docs.confluent.io/kafka-connectors/elasticsearch/current/overview.html)
## Configuration examples
### Kafka Connect
* This example assumes a Kafka cluster installed with the Strimzi operator and Elasticsearch deployed with the ECK operator.
* You can save the image built by Kafka Connect to a local registry and comment out the `build` section.
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: kafka-connect-kafka-flowx
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  version: 3.0.0
  replicas: 1
  bootstrapServers: kafka-flowx-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: kafka-flowx-cluster-ca-cert
        certificate: ca.crt
  image: ttl.sh/strimzi-connect-ttlsh266-3.0.0:24h
  config:
    group.id: flowx-kafka-connect
    offset.storage.topic: kafka-connect-cluster-offsets
    config.storage.topic: kafka-connect-cluster-configs
    status.storage.topic: kafka-connect-cluster-status
    # -1 means it will use the default replication factor configured in the broker
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
    topic.creation.enable: true
  build:
    output:
      type: docker
      # This image will last only for 24 hours and might be overwritten by other users.
      # Strimzi will use this tag to push the image, but it will use the digest to pull
      # the container image to make sure it pulls exactly the image we just built, so
      # it should not happen that you pull someone else's container image. However, we
      # recommend changing this to your own container registry or using a different
      # image name for any other than demo purposes.
      image: ttl.sh/strimzi-connect-ttlsh266-3.0.0:24h
    plugins:
      - name: kafka-connect-elasticsearch
        artifacts:
          - type: zip
            url: https://d1i4a15mxbxib1.cloudfront.net/api/plugins/confluentinc/kafka-connect-elasticsearch/versions/14.0.6/confluentinc-kafka-connect-elasticsearch-14.0.6.zip
  externalConfiguration:
    volumes:
      - name: elasticsearch-keystore-volume
        secret:
          secretName: elasticsearch-keystore
    env:
      - name: SPRING_ELASTICSEARCH_REST_PASSWORD
        valueFrom:
          secretKeyRef:
            name: elasticsearch-es-elastic-user
            key: elastic
```
### Kafka Elasticsearch Connector
```yaml
spec:
  class: io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
  tasksMax: 2
  config:
    tasks.max: "2" # The maximum number of tasks that can be run in parallel for this connector, which is 2 in this case. You can start with 2 and increase in case of huge load, but pay attention to the load that will be produced on Elasticsearch and also to the resources that you allocate to KafkaConnect so that it can support the threads.
    topics: "ai.flowx.core.index.process.v1" # Source Kafka topic. Must be the same as the one declared in the process defined as ${kafka.topic.naming.prefix}.core.index.process${kafka.topic.naming.suffix}
    key.ignore: "false" # Don't change this value! This tells Kafka Connect (KC) to process the key of the message - it will be used as the ID of the object in Elasticsearch.
    schema.ignore: "true" # Don't change this value! This tells KC to ignore the mapping from the Kafka message. Elasticsearch will use internal mapping. See below.
    connection.url: "https://elasticsearch-es-http:9200" # The URL to Elasticsearch. You should configure this.
    connection.username: "elastic" # The username to authenticate with Elasticsearch. You should configure this.
    connection.password: "Yyh03ZI66310Hyw59MXcR8xt" # The password to authenticate with Elasticsearch. You should configure this.
    elastic.security.protocol: "SSL" # The security protocol to use for connecting to Elasticsearch. You should use SSL if possible.
    elastic.https.ssl.keystore.location: "/opt/kafka/external-configuration/elasticsearch-keystore-volume/keystore.jks" # You should configure the path to the keystore where the Elasticsearch key is added.
    elastic.https.ssl.keystore.password: "MPx57vkACsRWKVap" # The password for the keystore file. You should configure this.
    elastic.https.ssl.key.password: "MPx57vkACsRWKVap" # The password for the key within the keystore file.
    elastic.https.ssl.keystore.type: "JKS" # The type of the keystore file. It is set to "JKS" (Java KeyStore).
    elastic.https.ssl.truststore.location: "/opt/kafka/external-configuration/elasticsearch-keystore-volume/keystore.jks" # You should configure the path to the truststore where the Elasticsearch key is added.
    elastic.https.ssl.truststore.password: "MPx57vkACsRWKVap" # The password for the truststore file. You should configure this.
    elastic.https.ssl.truststore.type: "JKS" # The type of the truststore file. It is set to "JKS".
    elastic.https.ssl.protocol: "TLS" # The SSL/TLS protocol to use for communication. It is set to "TLS".
    batch.size: 1000 # The size of the message batch that KC will process. The default should be fine; if you experience slowness and want to increase the speed, changing this may help depending on your scenario. Consult the documentation for more details.
    linger.ms: 1 # Start with this value and change it only if needed. Consult the connector documentation for more details.
    read.timeout.ms: 30000 # Increased to 30000 from the default 3000 due to flush.synchronously = true.
    flush.synchronously: "true" # Don't change this value! The way of writing to Elasticsearch. It must stay "true" for the router below to work.
    drop.invalid.message: "true" # Don't change this value! If set to false, the connector will wait for a configuration that allows processing the message. If set to true, the connector will drop the invalid message.
    behavior.on.null.values: "IGNORE" # Don't change this value! Must be set to IGNORE to avoid blocking the processing of null messages.
    behavior.on.malformed.documents: "IGNORE" # Don't change this value! Must be IGNORE to avoid blocking the processing of invalid JSONs.
    write.method: "UPSERT" # Don't change this value! UPSERT to create or update the index.
    type.name: "_doc" # Don't change this value! This is the name of the Elasticsearch type for indexing.
    key.converter: "org.apache.kafka.connect.storage.StringConverter" # Don't change this value!
    key.converter.schemas.enable: "false" # Don't change this value! No schema defined for the key in the message.
    value.converter: "org.apache.kafka.connect.json.JsonConverter" # Don't change this value!
    value.converter.schemas.enable: "false" # Don't change this value! No schema defined for the value in the message body.
    transforms: "routeTS" # Don't change this value! This represents the router that helps create indices dynamically based on the timestamp (process instance start date).
    transforms.routeTS.type: "org.apache.kafka.connect.transforms.TimestampRouter" # Don't change this value! It helps with routing the message to the correct index.
    transforms.routeTS.topic.format: "process_instance-${timestamp}" # You should configure this. It is important that this value starts with the value defined in the process-engine config flowx.indexing.processInstance.index-name. The name of the index will start with a prefix ("process_instance-" in this example) and must have the timestamp appended for dynamically creating indices. For backward compatibility (utilizing the data in the existing index), the prefix must be "process_instance-". However, backward compatibility isn't specifically required here.
    transforms.routeTS.timestamp.format: "yyyyMMdd" # This format ensures that the timestamp is represented consistently and can be easily parsed when creating or searching for indices based on the process instance start date. You can change this to the value you want. If you want monthly indices, set it to "yyyyMM". But be aware that once you change it, existing indexed objects will not be updated anymore; update messages will be treated as new objects and indexed again because they are sent to new indices. This is important! Try to find your index size and stick with it.
```
### HTTP indexing
```yaml
flowx:
  indexing:
    enabled: true
    processInstance:
      indexing-type: http
      index-name: process_instance
      shards: 1
      replicas: 1
```
If you don't want to remove the existing configuration parameters, you can use the following example:
```yaml
spring:
  elasticsearch:
    index-settings:
      name: process_instance
      shards: 1
      replicas: 1

flowx.use-elasticsearch: true

flowx:
  indexing:
    enabled: ${flowx.use-elasticsearch}
    processInstance:
      indexing-type: http
      index-name: ${spring.elasticsearch.index-settings.name}
      shards: ${spring.elasticsearch.index-settings.shards}
      replicas: ${spring.elasticsearch.index-settings.replicas}
```
## Querying Elasticsearch
To read from multiple indices, queries in Elasticsearch have been updated: they now run against an index pattern that matches multiple indices instead of a single index. The index pattern is derived from the value defined in the configuration property `flowx.indexing.processInstance.index-name`; for example, with the default value `process_instance`, queries run against the pattern `process_instance*`.
## Kafka topics - process events messages
This topic is used for sending the data to be indexed from the Process Engine. The data from this topic will be read by Kafka Connect.
* Key: `${kafka.topic.process.index.out}`
* Value: `${kafka.topic.naming.prefix}.core.index.process${kafka.topic.naming.suffix}`
| Default parameter (env var) | Default FLOWX.AI value (can be overwritten) |
| --------------------------------- | ------------------------------------------- |
| KAFKA\_TOPIC\_PROCESS\_INDEX\_OUT | ai.flowx.dev.core.index.process.v1 |
The topic name, defined in the value, will be used by Kafka Connect as source for the messages to be sent to Elasticsearch for indexing.
The attribute `indexLastUpdatedTime` is new and is populated for the kafka-connect strategy. It records the timestamp of the last operation performed on the object in the index.
## Elasticsearch update (index template)
The mappings between messages and Elasticsearch data types need to be specified. This is achieved through an index template created by the process engine during startup. The template applies to indices starting with the value defined in `flowx.indexing.processInstance.index-name` config. Here's an example of the index template:
```json
//process_instance_template
{
  "index_patterns": ["process_instance*"],
  "priority": 300,
  "template": {
    "mappings": {
      "_doc": {
        "properties": {
          "_class": {
            "type": "keyword",
            "index": false,
            "doc_values": false
          },
          "dateStarted": {
            "type": "date",
            "format": "date_optional_time||epoch_millis"
          },
          "id": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "indexLastUpdatedTime": {
            "type": "date",
            "format": "date_optional_time||epoch_millis"
          },
          "keyIdentifiers": {
            "type": "nested",
            "include_in_parent": true,
            "properties": {
              "_class": {
                "type": "keyword",
                "index": false,
                "doc_values": false
              },
              "key": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "originalValue": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "path": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              },
              "value": {
                "type": "text",
                "fields": {
                  "keyword": {
                    "type": "keyword",
                    "ignore_above": 256
                  }
                }
              }
            }
          },
          "processDefinitionName": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          },
          "state": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
              }
            }
          }
        }
      }
    },
    "settings": {
      "number_of_shards": 5, //This value will be overwritten by the value that you set at `FLOWX_INDEXING_PROCESSINSTANCE_SHARDS` environment variable.
      "number_of_replicas": 1
    }
  }
}
```
# Indexing config guidelines
The configuration of Elasticsearch for process instance indexing depends on various factors related to the application load, the number of process instances, parallel requests, and indexed keys per process. Although the best approach to sizing and configuring Elasticsearch is through testing and monitoring under load, here are some guidelines to help you get started.
## Indexing strategy
* Advantages of multiple small indices:
  * Fast indexing process.
  * Flexibility in cleaning up old data.
* Potential drawbacks:
  * Hitting the maximum number of shards per node, resulting in exceptions when creating new indices.
  * Increased search response time and memory footprint.
* Deletion:
  * When deleting data in Elasticsearch, it's recommended to delete entire indices instead of individual documents. Creating multiple smaller indices provides the flexibility to delete entire indices of old data that are no longer needed.
Alternatively, you can create fewer indices that span longer periods of time, such as one index per year. This approach offers small search response times but may result in longer indexing times and difficulty in cleaning up and recovering data in case of failure.
## Shard and replica configuration
The solution includes an index template that gets created with the settings from the process-engine app (name, shards, replicas) when running the app for the first time. This template controls the settings and mapping of all newly created indices.
Once an index is created, you cannot update its number of shards and replicas. However, you can update the settings from the index template at runtime in Elasticsearch, and new indices will be created with the updated settings. Note that the mapping should not be altered as it is required by the application.
## Recommendations for resource management
To manage functional indexing operations and resources efficiently, consider the following recommendations:
* [Sizing indexes upon creation](#sizing-indexes-upon-creation)
* [Balancing](#balancing)
* [Delete unneeded indices](#delete-unneeded-indices)
* [Reindex large indices](#reindex-large-indices)
* [Force merge indices](#force-merge-indices)
* [Shrink indices](#shrink-indices)
* [Combine indices](#combine-indices)
#### Sizing indexes upon creation
Recommendations:
* Start with monthly indices that have 2 shards and 1 replica. This setup is typically sufficient for handling up to 200k process instances per day: it allows parallel indexing across two primary shards, each with 1 replica (4 shards in total), and creates only 48 shards per year on the Elasticsearch nodes, far below the default limit of 1000 shards per node, leaving enough room for other indices as well.
* If you observe that indexing becomes very slow, look at the physical resources and shard size and start adapting the configuration.
* If you observe that indexing one monthly index gets massive and affects the performance, then think about switching to weekly indices.
* If you have huge spikes of parallel indexing load (even though that depends on the Kafka connect cluster configuration), then think about adding more main shards.
* Consider having at least one replica for high availability. However, keep in mind that the number of replicas is applied to each shard, so creating many replicas may lead to increased resource usage.
* Monitor the number of shards created and estimate when you might reach the maximum shards per node, taking into account the number of nodes in your cluster.
#### Balancing
When configuring index settings, consider the number of nodes in your cluster. The total number of shards for an index, calculated as `primary_shards * (replicas + 1)` (for example, 2 primary shards with 1 replica yield 4 shards), should be directly proportional to the number of nodes. This helps Elasticsearch distribute the load evenly across nodes and avoid overloading a single node. Avoid adding shards and replicas unnecessarily.
#### Delete unneeded indices
Deleting unnecessary indices reduces memory footprint, the number of used shards, and search time.
#### Reindex large indices
If you have large indices, consider reindexing them. Process instance indexing involves multiple updates on an initially indexed process instance, resulting in multiple versions of the same document in the index. Reindexing creates a new index with only the latest version, reducing storage size, memory footprint, and search response time.
#### Force merge indices
If there are indices with no write operations performed anymore, perform force merge to reduce the number of segments in the index. This operation reduces memory footprint and response time. Only perform force merge during off-peak hours when the index is no longer used for writing.
#### Shrink indices
If you have indices with many shards, consider shrinking them using the shrink operation. This reindexes the data into an index with fewer shards. Perform this operation during off-peak hours.
#### Combine indices
If there are indices with no write operations performed anymore (e.g., process\_instance indices older than 6 months), combine these indices into a larger one and delete the smaller ones. Use the reindexing operation during off-peak hours. Ensure that write operations are no longer needed from the FLOWX platform for these indices.
# FlowX Engine setup
This guide provides instructions on how to set up and configure the FlowX.AI Engine to meet specific requirements.
## Infrastructure prerequisites
Before initiating the FlowX.AI Engine, ensure the following infrastructure components are properly installed and configured:
* **Kafka**: Version 2.8 or higher.
* **Elasticsearch**: Version 7.11.0 or higher.
* **PostgreSQL**: Version 13 or higher, for storing application data (may vary based on your preferred relational database).
* **MongoDB**: Version 4.4 or higher, for managing runtime builds.
## Dependencies
The FlowX Engine interacts with various components critical for its operation:
* **Database**: Primary storage for the engine.
* **Redis Server**: Used for caching purposes. Refer to [Redis Configuration](../setup-guides-overview#redis-configuration).
* **Kafka**: Facilitates messaging and event-driven architecture. Details on [Configuring Kafka](#configuring-kafka).
For a microservices architecture, it’s common for services to manage their data via dedicated databases.
### Required External Services
* **Redis Cluster**: Essential for caching process definitions, compiled scripts, and Kafka responses.
* **Kafka Cluster**: Serves as the communication backbone with external plugins and integrations.
## Configuration Setup
FlowX.AI Engine utilizes environment variables for configuration. Below are the key environment variables you need to configure:
* [**Database configuration**](#database-configuration)
* [**Configuring authorization and access roles**](#authorization--access-roles)
* [**Configuring the data source**](../setup-guides-overview#datasource-configuration)
* [**Configuring Redis**](../setup-guides-overview#redis-configuration)
* [**Configuring logging**](../setup-guides-overview#logging)
* [**Configuring Kafka**](#configuring-kafka)
* [**Configuring access roles for processes**](./configuring-access-roles-for-processes)
## Database configuration
### PostgreSQL
* `SPRING_DATASOURCE_URL` - Database URL for PostgreSQL
* `SPRING_DATASOURCE_USERNAME` - Username for PostgreSQL
* `SPRING_DATASOURCE_PASSWORD` - Password for PostgreSQL
* `SPRING_DATASOURCE_DRIVER_CLASS_NAME` - Driver class for PostgreSQL
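A minimal sketch of these variables for a PostgreSQL instance (host, port, database name, and credentials are assumptions; adjust them for your environment):

```yaml
SPRING_DATASOURCE_URL: "jdbc:postgresql://postgres:5432/process_engine" # assumed host and database
SPRING_DATASOURCE_USERNAME: "flowx"                                     # assumed username
SPRING_DATASOURCE_PASSWORD: "<your-password>"
SPRING_DATASOURCE_DRIVER_CLASS_NAME: "org.postgresql.Driver"
```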
### Configuring MongoDB (runtime database - additional data)
The Process Engine must connect to the Runtime MongoDB instance to access runtime build information. Use the following environment variables for configuration:
* `SPRING_DATA_MONGODB_RUNTIME_ENABLED` - Enables runtime MongoDB usage
* **Default Value:** `true`
* `SPRING_DATA_MONGODB_RUNTIME_URI` - URI for connecting to MongoDB for Runtime DB (`app-runtime`)
* Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<host3>:<port>/<database>?retryWrites=false`
* `DB_USERNAME`: `app-runtime`
## Authorization & access roles
This section outlines the OAuth2 configuration settings for securing the Spring application, including resource server settings, security type, and access authorizations.
### Resource server settings (OAuth2 configuration)
Old configuration from \< v4.1 releases:
* `SECURITY_OAUTH2_BASE_SERVER_URL`: The base URL of the OAuth 2.0 Authorization Server, which handles authentication and authorization for clients and users. It is used to authorize clients and to issue and validate access tokens.
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID`: A unique identifier for a client application registered with the OAuth 2.0 Authorization Server. It is used to authenticate the client application when it attempts to access resources on behalf of a user.
* `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET`: The secret key used to authenticate requests made by an authorization client.
* `SECURITY_OAUTH2_REALM`: Specifies the realm name used when authenticating with OAuth2 providers in the Spring Security OAuth2 framework.
The new configuration, available starting with the v4.1 release, is shown below.
| Environment variable | Description | Default Value |
| ----------------------------------------------------------------------- | ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
| `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_INTROSPECTION_URI` | URI for token introspection to validate opaque tokens | `${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/token/introspect` |
| `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENT_ID` | Client ID for token introspection | `${security.oauth2.client.client-id}` |
| `SPRING_SECURITY_OAUTH2_RESOURCE_SERVER_OPAQUE_TOKEN_CLIENT_SECRET` | Client secret for token introspection | `${security.oauth2.client.client-secret}` |
### Service account settings
This section contains the environment variables for configuring the process engine service account.
| Environment Variable | Description | Default Value |
| ----------------------------------------------------- | ------------------------------------- | ------------------------------------- |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID` | Client ID for the service account | `flowx-${SPRING_APPLICATION_NAME}-sa` |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET` | Client secret for the service account | |
More details about the necessary service account can be found [**here**](../access-management/configuring-an-iam-solution#process-engine-service-account).
### Security configuration
| Environment variable | Description | Default Value |
| ---------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- |
| `SECURITY_TYPE` | Type of security mechanism used | `oauth2` |
| `SECURITY_BASIC_ENABLED` | Enable basic authentication | `false` |
| `SECURITY_PUBLIC_PATHS_0` | List of public paths that do not require authentication | `/api/platform/components-versions` |
| `SECURITY_PUBLIC_PATHS_1` | List of public paths that do not require authentication | `/manage/actuator/health` |
| `SECURITY_PATH_AUTHORIZATIONS_0_PATH` | Defines a security path or endpoint pattern. It specifies that the security settings apply to all paths under the `"/api/"` path. The `**` is a wildcard that means it includes all subpaths under` "/api/**"`. | `"/api/**"` |
| `SECURITY_PATH_AUTHORIZATIONS_0_ROLES_ALLOWED` | Specifies the roles allowed to access the specified path. The default `"ANY_AUTHENTICATED_USER"` grants access to any authenticated user; an empty value (`""`) would mean that no specific role is required for authorization. | `"ANY_AUTHENTICATED_USER"` |
## Configuring Kafka
Kafka handles all communication between the FlowX.AI Engine and external plugins and integrations. It is also used for notifying running process instances when certain events occur.
### Kafka connection settings
| Environment Variable | Description | Default Value |
| -------------------------------- | --------------------------- | ---------------- |
| `SPRING_KAFKA_BOOTSTRAP_SERVERS` | Kafka bootstrap servers | `localhost:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol for Kafka | `"PLAINTEXT"` |
### Kafka consumer retry settings
| Environment Variable | Description | Default Value |
| ------------------------------------- | ------------------------------------------------------------------ | ------------- |
| `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` | Interval between retries after `AuthorizationException` is thrown. | `10` |
### Consumer groups & consumer threads configuration
Both a producer and a consumer must be configured:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/engine_kafka_pattern.svg)
In Kafka, a consumer group is a group of consumers that jointly consume and process messages from one or more Kafka topics. Each consumer group has a unique identifier called a group ID, which Kafka uses to manage message consumption and distribution among the members of the group.
Thread numbers, on the other hand, refer to the number of threads that a consumer application uses to process messages from Kafka. By default, each consumer instance runs in a single thread, which can limit the throughput of message processing. Increasing the number of consumer threads can help to improve the parallelism and efficiency of message consumption, especially when dealing with high message volumes.
Both group IDs and thread numbers can be configured in Kafka to optimize the processing of messages according to specific requirements, such as message volume, message type, and processing latency.
The consumer-related settings (group IDs and thread counts) can be configured separately for each message type, as follows:
#### Consumer group configuration
| Environment Variable | Description | Default Value |
| -------------------------------------------------- | ------------------------------------------------------------------- | ------------------ |
| `KAFKA_CONSUMER_GROUP_ID_NOTIFY_ADVANCE` | Group ID for notifying advance actions | `notif123-preview` |
| `KAFKA_CONSUMER_GROUP_ID_NOTIFY_PARENT` | Group ID for notifying when a subprocess is blocked | `notif123-preview` |
| `KAFKA_CONSUMER_GROUP_ID_ADAPTERS` | Group ID for messages related to adapters | `notif123-preview` |
| `KAFKA_CONSUMER_GROUP_ID_SCHEDULER_RUN_ACTION` | Group ID for running scheduled actions | `notif123-preview` |
| `KAFKA_CONSUMER_GROUP_ID_SCHEDULER_ADVANCING` | Group ID for messages indicating continuing advancement | `notif123-preview` |
| `KAFKA_CONSUMER_GROUP_ID_PROCESS_START` | Group ID for starting processes | `notif123-preview` |
| `KAFKA_CONSUMER_GROUP_ID_PROCESS_START_FOR_EVENT` | Group ID for starting processes for an event | `notif123-preview` |
| `KAFKA_CONSUMER_GROUP_ID_PROCESS_EXPIRE` | Group ID for expiring processes | `notif123-preview` |
| `KAFKA_CONSUMER_GROUP_ID_PROCESS_OPERATIONS` | Group ID for processing operations from Task Management plugin | `notif123-preview` |
| `KAFKA_CONSUMER_GROUP_ID_PROCESS_BATCH_PROCESSING` | Group ID for processing bulk operations from Task Management plugin | `notif123-preview` |
#### Consumer thread configuration
| Environment Variable | Description | Default Value |
| ------------------------------------------------- | --------------------------------------------------------------------- | ------------- |
| `KAFKA_CONSUMER_THREADS_NOTIFY_ADVANCE` | Number of threads for notifying advance actions | `6` |
| `KAFKA_CONSUMER_THREADS_NOTIFY_PARENT` | Number of threads for notifying when a subprocess is blocked | `6` |
| `KAFKA_CONSUMER_THREADS_ADAPTERS` | Number of threads for processing messages related to adapters | `6` |
| `KAFKA_CONSUMER_THREADS_SCHEDULER_ADVANCING` | Number of threads for continuing advancement | `6` |
| `KAFKA_CONSUMER_THREADS_SCHEDULER_RUN_ACTION` | Number of threads for running scheduled actions | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_START` | Number of threads for starting processes | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_START_FOR_EVENT` | Number of threads for starting processes for an event | `2` |
| `KAFKA_CONSUMER_THREADS_PROCESS_EXPIRE` | Number of threads for expiring processes | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_OPERATIONS` | Number of threads for processing operations from task management | `6` |
| `KAFKA_CONSUMER_THREADS_PROCESS_BATCH_PROCESSING` | Number of threads for processing bulk operations from task management | `6` |
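For example, a minimal sketch that overrides one consumer group and its thread count (the group ID and thread count are illustrative; tune them to your message volume and latency requirements):

```yaml
KAFKA_CONSUMER_GROUP_ID_PROCESS_START: "flowx-engine-process-start" # dedicated consumer group
KAFKA_CONSUMER_THREADS_PROCESS_START: "12"                          # more threads for higher throughput
```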
Note that the Engine consumes all events on topics matching the configured pattern. This makes it possible to create a new integration and connect it to the Engine without changing the configuration.
### Configuring Kafka topics
The suggested topic pattern naming convention is the following:
```yaml
topic:
naming:
package: "ai.flowx."
environment: "dev."
version: ".v1"
prefix: ${kafka.topic.naming.package}${kafka.topic.naming.environment}
suffix: ${kafka.topic.naming.version}
engineReceivePattern: engine.receive.
pattern: ${kafka.topic.naming.prefix}${kafka.topic.naming.engineReceivePattern}*
```
| Environment Variable | Description | Default FlowX.AI value (can be overwritten) |
| ------------------------------------ | ----------------------------------------------------------------------- | --------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_NOTIFY_ADVANCE` | Kafka topic used internally by the Engine for advancing processes | `ai.flowx.dev.core.notify.advance.process.v1` |
| `KAFKA_TOPIC_PROCESS_NOTIFY_PARENT` | Kafka topic used for sub-processes to notify the parent process | `ai.flowx.dev.core.notify.parent.process.v1` |
| `KAFKA_TOPIC_PATTERN` | The topic name pattern that the Engine listens on for incoming events | `ai.flowx.dev.engine.receive.*` |
| `KAFKA_TOPIC_LICENSE_OUT` | The topic name used by the Engine to generate licensing-related details | `ai.flowx.dev.core.trigger.save.license.v1` |
### Topics related to the Task Management plugin
| Default parameter (env var) | Description | Default FlowX.AI value (can be overwritten) |
| ---------------------------------------- | ------------------------------------------------------------------------- | ------------------------------------------------ |
| `KAFKA_TOPIC_TASK_OUT` | Kafka topic used for sending notifications to the plugin | `ai.flowx.dev.plugin.tasks.trigger.save.task.v1` |
| `KAFKA_TOPIC_PROCESS_OPERATIONS_IN` | Kafka topic used for receiving calls from the Task Management plugin with information regarding operations performed | `ai.flowx.dev.core.trigger.operations.v1` |
| `KAFKA_TOPIC_PROCESS_OPERATIONS_BULK_IN` | Kafka topic where operations can be performed in bulk, allowing multiple operations to be sent at once | `ai.flowx.core.trigger.operations.bulk.v1` |
#### OPERATIONS\_IN request example
```json
{
"operationType": "UNASSIGN", //type of operation performed in Task Management plugin
"taskId": "some task id",
"processInstanceUuid": "1cff0b7d-966b-4b35-9e9b-63b1d6757ec6",
"swimlaneName": "Default",
"swimlaneId": "51ec1241-fe06-4576-9c84-31598c05c527",
"owner": {
"firstName": null,
"lastName": null,
"username": "service-account-flowx-process-engine-account",
"enabled": false
},
"author": "admin@flowx.ai"
}
```
#### BULK\_IN request example
```json
{
"operations": [
{
"operationType": "HOLD",
"taskId": "some task id",
"processInstanceUuid": "d3aabfd8-d041-4c62-892f-22d17923b223", // the id of the process instance
"swimlaneName": "Default", //name of the swimlane
"owner": null,
"author": "john.doe@flowx.ai",
},
{
"operationType": "HOLD",
"taskId": "some task id",
"processInstanceUuid": "d3aabfd8-d041-4c62-892f-22d17923b223",
"swimlaneName": "Default", //name of the swimlane
"owner": null,
"author": "john.doe@flowx.ai",
}
]
}
```
If you need to send additional keys in the response, attach them in the message headers, as in the following example, where we used the `requestID` key.
A response should be sent on a `callbackTopic` if it is mentioned in the headers, as in the following example:
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/platform-deep-dive/bulk_requestid.png)
```json
{"processInstanceId": ${processInstanceId}, "callbackTopic": "test.operations.out", "requestID":"1234567890"}
```
Task Manager operations include the following: assignment, unassignment, hold, unhold, and terminate. Each operation is matched with the `...operations.out` topic on the engine side. For more information, check the Task Management plugin documentation:
📄 [**Task management plugin**](/4.0/docs/platform-deep-dive/plugins/custom-plugins/task-management/task-management-overview)
### Topics related to the scheduler extension
[Scheduler](/4.0/docs/platform-deep-dive/core-extensions/scheduler)
| Environment variable | Description | Default FlowX.AI value (can be overwritten) |
| -------------------------------------------- | ----------------------------------------------------------------------------------- | ---------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_EXPIRE_IN` | Topic name for requests to expire processes | `ai.flowx.dev.core.trigger.expire.process.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_SET` | Topic name used for scheduling process expiration | `ai.flowx.dev.core.trigger.set.schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_STOP` | Topic name used for stopping process expiration | `ai.flowx.dev.core.trigger.stop.schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_IN_RUN_ACTION` | Topic name for requests to run scheduled actions | `ai.flowx.dev.core.trigger.run.action.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULE_IN_ADVANCE` | Topic name for events related to advancing through the database, sent by the scheduler | `ai.flowx.dev.core.trigger.advance.process.v1` |
### Topics related to Timer Events
| Environment variable | Description | Default FlowX.AI value (can be overwritten) |
| ----------------------------------------------------- | ----------------------------------------------- | -------------------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_SCHEDULED_TIMER_EVENTS_OUT_SET` | Used to communicate with Scheduler microservice | `ai.flowx.dev.core.trigger.set.timer-event-schedule.v1` |
| `KAFKA_TOPIC_PROCESS_SCHEDULED_TIMER_EVENTS_OUT_STOP` | Used to communicate with Scheduler microservice | `ai.flowx.dev.core.trigger.stop.timer-event-schedule.v1` |
### Topics related to the Search Data service
| Environment variable | Description | Default FlowX.AI value (can be overwritten) |
| ----------------------------- | ------------------------------------------------------------------------------ | --------------------------------------------------------- |
| `KAFKA_TOPIC_DATA_SEARCH_IN` | The topic name that the Engine listens on for requests to search for processes | `ai.flowx.dev.core.trigger.search.data.v1` |
| `KAFKA_TOPIC_DATA_SEARCH_OUT` | The topic name used by the Engine to reply after finding a process | `ai.flowx.dev.engine.receive.core.search.data.results.v1` |
### Topics related to the Audit service
| Environment variable | Description | Default FlowX.AI value (can be overwritten) |
| ----------------------- | -------------------------------- | ------------------------------------------- |
| `KAFKA_TOPIC_AUDIT_OUT` | Topic key for sending audit logs | `ai.flowx.dev.core.save.audit.v1` |
### Topics related to ES indexing
| Environment variable | Default FlowX.AI value (can be overwritten) |
| --------------------------------- | ------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_INDEX_OUT` | `ai.flowx.dev.core.index.process.v1` |
### Processes that can be started by sending messages to a Kafka topic
| Environment variable | Description | Default FlowX.AI value (can be overwritten) |
| ------------------------------- | ----------------------------------------------------------------------------- | -------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_START_IN` | The Engine listens on this topic for requests to start a new process instance | `ai.flowx.dev.core.trigger.start.process.v1` |
| `KAFKA_TOPIC_PROCESS_START_OUT` | Used for sending out the reply after starting a new process instance | `ai.flowx.dev.core.confirm.start.process.v1` |
### Topics related to Message Events
| Environment variable | Default FLOWX.AI value (can be overwritten) |
| ---------------------------------------- | ---------------------------------------------------- |
| `KAFKA_TOPIC_PROCESS_EVENT_MESSAGE` | `ai.flowx.dev.core.message.event.process.v1` |
| `KAFKA_TOPIC_PROCESS_START_FOR_EVENT` | `ai.flowx.dev.core.trigger.start-for-event.process.v1` |
### Topics related to Events-gateway microservice
| Environment variable | Description | Default FlowX.AI value (can be overwritten) |
| -------------------------------------------- | --------------------------------------------------------- | ---------------------------------------------------- |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_MESSAGE` | Outgoing messages from process-engine to events-gateway | `ai.flowx.eventsgateway.engine.commands.message.v1` |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_DISCONNECT` | Disconnect commands from process-engine to events-gateway | `ai.flowx.eventsgateway.engine.commands.disconnect.v1` |
| `KAFKA_TOPIC_EVENTSGATEWAY_OUT_CONNECT` | Connect commands from process-engine to events-gateway | `ai.flowx.eventsgateway.engine.commands.connect.v1` |
## Configuring file upload size
The maximum file size allowed for uploads can be set by using the following environment variables:
| Environment variable | Description | Default FlowX.AI value (can be overwritten) |
| ---------------------------------------------- | ---------------------------------------- | ------------------------------------------- |
| `SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE` | Maximum file size allowed for uploads | `50MB` |
| `SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE` | Maximum request size allowed for uploads | `50MB` |
## Connecting the Advancing controller
To use the Advancing controller, the following environment variables are needed so that `process-engine` can connect to the Advancing Postgres database:
| Environment variable | Description |
| -------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
| `ADVANCING_DATASOURCE_JDBC_URL` | Specifies the connection URL for a JDBC data source, including the server, port, database name, and other parameters |
| `ADVANCING_DATASOURCE_USERNAME` | Used to authenticate the user's access to the data source |
| `ADVANCING_DATASOURCE_PASSWORD` | Sets the password for a data source connection |
### Configuring the Advancing controller
| Environment variable | Description | Default FlowX.AI value (can be overwritten) |
| ----------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------- |
| `ADVANCING_TYPE` | Specifies the type of advancing mechanism to be used. The advancing can be done either through Kafka or through the database (parallel) | `PARALLEL` (possible values: `KAFKA`, `PARALLEL`) |
| `ADVANCING_THREADS` | Number of parallel threads to be used | `20` |
| `ADVANCING_PICKING_BATCH_SIZE` | Number of tasks to pick in each batch | `10` |
| `ADVANCING_PICKING_PAUSE_MILLIS` | Pause duration between picking batches, in milliseconds. After picking a batch of tasks, the system waits for this duration before picking the next batch. This helps control the rate of task intake and processing | `100` |
| `ADVANCING_COOLDOWN_AFTER_SECONDS` | Cooldown period after processing a batch, in seconds. The system waits for this period after completing a batch before starting the next cycle. This can be useful for preventing system overload and managing resource usage | `120` |
| `ADVANCING_SCHEDULER_HEARTBEAT_CRONEXPRESSION` | A cron expression that defines the schedule for the scheduler's heartbeat. A frequent heartbeat helps ensure the system is functioning correctly and tasks are being processed as expected | `"*/2 * * * * ?"` |
## Configuring cleanup mechanism
This section contains environment variables that configure the scheduler's behavior, including thread count, cron jobs for data partitioning, process cleanup, and master election.
| Environment Variable | Description | Default Value | Possible Values |
| ------------------------------------------- | ------------------------------------------------- | -------------------------------------------------------------------------------------------- | --------------------------------------- |
| `SCHEDULER_THREADS` | Number of threads for the scheduler | `10` | Integer values (e.g., `10`, `20`) |
| `SCHEDULER_PROCESS_CLEANUP_ENABLED` | Activates the cron job for process cleanup | `false` | `true`, `false` |
| `SCHEDULER_PROCESS_CLEANUP_CRON_EXPRESSION` | Cron expression for the process cleanup scheduler | `0 */5 0-5 * * ?` -> every day during the night, every 5 minutes, at the start of the minute | Cron expression (e.g., `0 0 1 * * ?`) |
| `SCHEDULER_PROCESS_CLEANUP_BATCH_SIZE` | Number of processes to be cleaned up in one batch | `1000` | Integer values (e.g., `100`, `1000`) |
| `SCHEDULER_MASTER_ELECTION_CRON_EXPRESSION` | Cron expression for the master election process | `30 */3 * * * ?` -> master election every 3 minutes | Cron expression (e.g., `0 0/3 * * * ?`) |
## Managing subprocesses expiration
This section details the environment variable that controls the expiration of subprocesses within a parent process. It determines whether subprocesses should terminate when the parent process expires or follow their own expiration settings.
| Environment Variable | Description | Default Value | Possible Values |
| ----------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- | --------------- |
| `FLOWX_PROCESS_EXPIRE_SUBPROCESSES` | Governs subprocess expiration in a parent process. When true, terminates all associated subprocesses upon parent process expiration. When false, subprocesses follow their individual expiration settings or persist indefinitely if not configured | `true` | `true`, `false` |
## Configuring application management
The FlowX helm chart provides a management service with the necessary parameters to integrate with the Prometheus operator. However, this integration is disabled by default.
Old configuration from \< v4.1 releases (will be deprecated in v4.5):
* `MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED`: Enables or disables Prometheus metrics export.
The new configuration, available starting with the v4.1 release, is shown below. Note that this setup is backwards compatible; it does not affect the configuration from v3.4.x. The older configuration files will still work until the v4.5 release.
To configure Prometheus metrics export for the FlowX.AI Engine, the following environment variable is required:
| Environment Variable | Description | Default Value | Possible Values |
| ---------------------------------------------- | ---------------------------------------------- | ------------- | --------------- |
| `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED` | Enables or disables Prometheus metrics export. | `false` | `true`, `false` |
## RBAC configuration
Process Engine requires specific RBAC (Role-Based Access Control) permissions to ensure proper access to Kubernetes resources like pods for database lock management. This configuration enables the necessary RBAC rules.
* `rbac.create`: Set to `false` to avoid creating additional RBAC resources by default. Custom rules can still be added if required.
* `rbac.rules`: Define custom RBAC rules for database locking, as shown below.
```yaml
rbac:
create: true
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
- pods
verbs:
- get
- list
- watch
```
# Partitioning & archiving
Improving data management using data partitioning and the archival processes.
## Overview
Starting with release v4.1.1, you can enable data partitioning in the FlowX.AI Engine.
Partitioning and archiving are data management strategies used to handle large volumes of data efficiently. They improve database performance, manageability, and storage optimization. By dividing data into partitions and archiving older or less frequently accessed data, organizations can ensure better data management, quicker query responses, and optimized storage use.
## Partitioning
Partitioning is the process of dividing a large database table into smaller, more manageable pieces, called partitions. Each partition can be managed and queried independently. The primary goal is to improve performance, enhance manageability, and simplify maintenance tasks such as backups and archiving.
Partitions can be created per day, week, or month: a partition ID is computed at insert time for each row of `process_instance` and related tables.
Afterwards, a retention period can be set up (e.g., 3 partitions). A FlowX Engine-driven cron job (with a configurable start interval) checks whether any partition needs to be archived and performs the necessary actions.
### Benefits of partitioning
* **Improved Query Performance**: By scanning only relevant partitions instead of the entire table.
* **Simplified Maintenance**: Easier to perform maintenance tasks like backups and index maintenance.
* **Data Management**: Better data organization and management by dividing data based on specific criteria such as date, range, or list.
Database Compatibility: OracleDB and PostgreSQL.
## Archiving
Archiving involves moving old data from the primary tables to separate archive tables. This helps in reducing the load on the primary tables, thus improving performance and manageability.
### Benefits of archiving
* **Storage Optimization**: Archived data can be compressed, saving storage space.
* **Performance Improvement**: Reduces the volume of data in primary tables, leading to faster query responses.
* **Historical Data Management**: Maintains historical data in a separate, manageable form.
## Partitioning and archiving in OracleDB
### OracleDB partitioning
OracleDB [**partitioned tables**](https://docs.oracle.com/en/database/oracle/oracle-database/18/vldbg/partition-concepts.html#GUID-6D369646-16AF-487B-BF32-5F6569D27C8A) are utilized, allowing for efficient data management and query performance.
Each partition can be managed and queried independently.
Example: A `process_instance` table can be partitioned by day, week, or month.
Improved query performance by scanning only relevant partitions.
Simplified maintenance tasks like backups and archiving.
### OracleDB archiving
Archiving a partition involves:
* Detaching the partition from the main table.
* Converting the detached partition into a new table named `archived_${table_name}_${interval_name}_${reference}`, for example: `archived_process_instance_monthly_2024_03`.

The `${reference}` date is in the format `yyyy-MM-dd` if the interval is `DAY`, `yyyy-MM` if the interval is `MONTH`, or `yyyy-weekOfYear` if the interval is `WEEK`.

The archiving process:
1. Identify partitions eligible for archiving based on retention settings.
2. Detach the partition from the main table.
3. Create a new table with the data from the detached partition.
4. Optionally compress the new table to save space.

This manages historical data by moving it to separate tables, reducing the load on the main table, while compression options (OFF, BASIC, ADVANCED) further optimize storage.
Oracle offers several compression options—OFF, BASIC, and ADVANCED—that optimize storage. Each option provides varying levels of data compression, impacting storage savings and performance differently.
* **OFF**: No compression is applied.
* **BASIC**: Suitable for read-only or read-mostly environments, it compresses data without requiring additional licenses.
* **ADVANCED**: Offers the highest level of compression, supporting a wide range of data types and operations. It requires an Advanced Compression license and provides significant storage savings and performance improvements by keeping data compressed in memory.
For more details, you can refer to Oracle's [**Advanced Compression documentation**](https://www.oracle.com/database/advanced-compression/#rc30related).
## PostgreSQL archiving
Archiving a partition involves:
* Creating a new table for the partition being archived.
* Moving data from the main table to the new archive table (in batches).

Here the database has more work to do than just changing some labels (actual data inserts and deletes, versus relabeling a partition into a table on OracleDB), so the data move is batched and the batch size is configurable.

The archiving process:
1. Identify partitions eligible for archiving based on retention settings.
2. Create a new table following this naming convention: `archived__${table_name}__${interval_name}_${reference}`, for example: `archived__process_instance__weekly_2024_09`, and move the data from the primary table into it.
3. Configure the batch size for data movement to control the load on the database.

This efficiently manages historical data, and batch processing allows better control over the archiving process and system performance.

Differences from OracleDB:
* Archiving involves actual data movement (insert and delete operations), unlike OracleDB where it is mainly a relabeling of partitions.
* The batch size for data movement is configurable, allowing fine-tuning of the archiving process.
## Daily operations
Once set up, the partitioning and archiving process involves the following operations:
* The `partition_id` is automatically calculated based on the configured interval (DAY, WEEK, MONTH). For example, with daily partitioning, `2024-03-01 13:00:00` results in `partition_id = 124061`; see the [**Partition ID calculation**](#partition-id-calculation) section.
* Data older than the configured retention interval becomes eligible for archiving and compressing.
* A cron job checks for eligible partitions; eligible partitions are archived and optionally compressed.
* The process includes deactivating foreign keys, creating new archive tables, moving data references, reactivating foreign keys, and dropping the original partition.
* Archived tables remain in the database but occupy less space if compression is enabled.
**Recommendation**: to free up space, consider moving archived tables to a different database or tablespace. Additionally, you have the option to move only process instances or copy definitions depending on your needs.
When enabling partitioning, please consider the following:
**Ensure Process Termination**: Make sure that process instances get terminated. Archiving removes process instance data from the working data set, making it unavailable in FlowX. Started instances should be finished before archiving takes place.
**Set Process Expiry**: To ensure termination of process instances prior to archiving, it is recommended to configure process expiration. Refer to the following section for guidance on setting up process expiry using FlowX Designer:
[Timer Expressions](../../docs/building-blocks/node/timer-events/timer-expressions)
Future schema updates or migrations will not affect archived tables. They retain the schema from the moment of archiving.
## Configuring partitioning and archiving
The Partitioning and Archiving feature is optional and can be configured as needed.
When starting a new version of the process-engine, we recommend manually executing the setup SQL commands from Liquibase, as they may take a long time to run. After setup, all existing information will go into the initial partition.
This section contains environment variables that control the settings for data partitioning and archiving and also for the archiving scheduler. These settings determine how data is partitioned, retained, and managed, including compression and batch processing.
| Environment Variable | Description | Default Value | Possible Values |
| -------------------------------------------------------- | ---------------------------------------------------------------------------- | ------------------------------------ | ------------------------------------- |
| `FLOWX_DATA_PARTITIONING_ENABLED` | Activates data partitioning. | `false` | `true`, `false` |
| `FLOWX_DATA_PARTITIONING_INTERVAL` | Interval for partitioning (the time interval contained in a partition). | `MONTH` | `DAY`, `WEEK`, `MONTH` |
| `FLOWX_DATA_PARTITIONING_RETENTION_INTERVALS` | Number of intervals retained in the FlowX database (for partitioned tables). | `3` | Integer values (e.g., `1`, `2`, `3`) |
| `FLOWX_DATA_PARTITIONING_DETACHED_PARTITION_COMPRESSION` | Enables compression for archived (detached) partitions (Oracle only). | `OFF` | `OFF`, `BASIC`, `ADVANCED` |
| `FLOWX_DATA_PARTITIONING_MOVED_DATA_BATCH_SIZE` | Batch size for moving data (PostgreSQL only). | `5000` | Integer values (e.g., `1000`, `5000`) |
| `SCHEDULER_DATA_PARTITIONING_ENABLED` | Activates the cron job for archiving partitions. | `true` | `true`, `false` |
| `SCHEDULER_DATA_PARTITIONING_CRON_EXPRESSION` | Cron expression for the data partitioning scheduler. | `0 0 1 * * ?` -> every day at 1:00AM | Cron expression (e.g., `0 0 1 * * ?`) |
Compression for archived (detached) partitions is available only for Oracle DBs.
The batch size setting for archiving data is available only for PostgreSQL DBs.
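For example, a minimal sketch enabling monthly partitioning with a three-interval retention window, using the defaults listed above:

```yaml
FLOWX_DATA_PARTITIONING_ENABLED: "true"
FLOWX_DATA_PARTITIONING_INTERVAL: "MONTH"
FLOWX_DATA_PARTITIONING_RETENTION_INTERVALS: "3"            # keep 3 months in the working tables
SCHEDULER_DATA_PARTITIONING_ENABLED: "true"
SCHEDULER_DATA_PARTITIONING_CRON_EXPRESSION: "0 0 1 * * ?"  # check for archivable partitions daily at 1:00 AM
```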
## Logging information
Partitioning and archiving actions are logged in two tables:
* `DATA_PARTITIONING_LOG`: For tracking archived partitions.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/partitioning_log.png)
* `DATA_PARTITIONING_LOG_ENTRY`: For logging SQL commands executed for archiving.
![](https://s3.eu-west-1.amazonaws.com/docx.flowx.ai/post41/partitioning_log_entry.png)
## Enabling partitioning and Elasticsearch indexing strategy
When partitioning is enabled, the Elasticsearch indexing strategy must also be enabled and configured in the FlowX Engine setup.
**Why?**
* When archiving process instances, the corresponding data must also be deleted from Elasticsearch: not just the cache, but also the indexed keys (e.g., those indexed for data search on process instances).
### Elasticsearch indexing configuration
Check the Elasticsearch indexing setup here:
The partitioning configuration must be aligned with the configuration extracted from the Kafka Elasticsearch Connector, especially the following settings, so that the intervals match:
### Index partitioning
* `transforms.routeTS.topic.format: "process_instance-${timestamp}"`: This value must start with the index name defined in the process-engine config: flowx.indexing.processInstance.index-name. In this example, the index name is prefixed with "process\_instance-" and appended with a timestamp for dynamic index creation. For backward compatibility, the prefix must be "process\_instance-". However, backward compatibility is not strictly required here.
* `transforms.routeTS.timestamp.format: "yyyyMMdd"`: This format ensures that timestamps are consistently represented and easily parsed when creating or searching for indices based on the process instance start date. You can adjust this value as needed (e.g., for monthly indexes, use "yyyyMM"). However, note that changing this format will cause existing indexed objects to remain unchanged, and update messages will be treated as new objects, indexed again in new indices. It is crucial to determine your index size and maintain consistency.
Check the following Kafka Elasticsearch Connector configuration example for more details:
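The fragment below is an illustrative sketch, not a drop-in configuration: it assumes the Confluent Elasticsearch sink connector and the standard Kafka Connect `TimestampRouter` transform, and the connector name is hypothetical.

```yaml
name: process-instance-es-sink # hypothetical connector name
connector.class: io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
topics: ai.flowx.dev.core.index.process.v1
transforms: routeTS
transforms.routeTS.type: org.apache.kafka.connect.transforms.TimestampRouter
# Must start with the index name from flowx.indexing.processInstance.index-name
transforms.routeTS.topic.format: "process_instance-${timestamp}"
# Align with your chosen partitioning interval (e.g., "yyyyMM" for monthly indexes)
transforms.routeTS.timestamp.format: "yyyyMMdd"
```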
## Technical details
### Partition ID calculation
* The `partition_id` format follows this structure: `{LEVEL}{YEAR}{BIN_ID_OF_YEAR}`. This ID is calculated based on the start date of the `process_instance`, the partition interval, and the partition level.
* `LEVEL`: This represents the "Partitioning level," which increments with each change in the partitioning interval (for example, if it changes from `DAY` to `MONTH` or vice versa).
* `YEAR`: The year extracted from the `process_instance` date.
* `BIN_ID_OF_YEAR`: This is the ID of a bucket associated with the `YEAR`. It is created for all instances within the selected partitioning interval. The maximum number of buckets is determined by the partitioning frequency:
* **Daily**: Up to 366 buckets per year
* **Weekly**: Up to 53 buckets per year
* **Monthly**: Up to 12 buckets per year
#### Calculation example
For a timestamp of `2024-03-01 13:00:00` with a daily partitioning interval, the `partition_id` would be `124061`:
* `1`: Partitioning Level (`LEVEL`)
* `24`: Year - 2024 (`YEAR`)
* `061`: Bucket per configuration (61st day of the year)
### Archived tables
* Naming format: `archived__${table_name}__${interval_name}_${reference}`. Examples:
  * `archived__process_instance__monthly_2024_03`
  * `archived__process_instance__weekly_2024_09`
  * `archived__process_instance__daily_2024_03_06`
# Integration designer setup
This guide provides step-by-step instructions to set up and configure the Integration Designer service, including database, Kafka, and OAuth2 authentication settings, to ensure integration and data flow management.
## Infrastructure prerequisites
The Integration Designer service requires the following components to be set up before it can be started:
* **PostgreSQL** - version 13 or higher for managing advancing data source
* **MongoDB** - version 4.4 or higher for managing integration and runtime data
* **Kafka** - version 2.8 or higher for event-driven communication between services
* **OAuth2 Authentication** - Ensure a Keycloak server or compatible OAuth2 authorization server is configured
## Dependencies
* [**Database configuration**](#database-configuration)
* [**Kafka configuration**](#configuring-kafka)
* [**Authentication & access roles**](#configuring-authentication-and-access-roles)
* [**Logging**](./setup-guides-overview#logging)
## Configuration
### Database configuration
Integration Designer uses both PostgreSQL and MongoDB for managing advancing data and integration information. Configure these database connections with the following environment variables:
#### PostgreSQL (Advancing data source)
* `ADVANCING_DATASOURCE_URL` - Database URL for the advancing data source in PostgreSQL
* `ADVANCING_DATASOURCE_USERNAME` - Username for the advancing data source in PostgreSQL
#### MongoDB (Integration data and runtime data)
Integration Designer requires two MongoDB databases for managing integration-specific data and runtime data. The `integration-designer` database is dedicated to Integration Designer, while the shared `app-runtime` database supports multiple services.
* **Integration Designer Database** (`integration-designer`) - Stores data specific to Integration Designer, such as integration configurations and metadata.
* **Shared Runtime Database** (`app-runtime`) - Used across multiple services for runtime data.
Set up these MongoDB connections with the following environment variables:
* `SPRING_DATA_MONGODB_URI`- URI for connecting to the Integration Designer MongoDB instance
* Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<host3>:<port>/<database>?retryWrites=false`
* `DB_USERNAME`: `integration-designer`
* `SPRING_DATA_MONGODB_STORAGE` - Specifies the storage type used for the Runtime MongoDB instance (Azure environments only)
* **Possible Values:** `mongodb`, `cosmosdb`
* **Default Value:** `mongodb`
* `SPRING_DATA_MONGODB_RUNTIME_ENABLED` - Enables runtime MongoDB usage
* **Default Value:** `true`
* `SPRING_DATA_MONGODB_RUNTIME_URI` - URI for connecting to the Runtime MongoDB database (`app-runtime`)
* **Format**: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<host3>:<port>/<database>?retryWrites=false`
* `DB_USERNAME`: `app-runtime`
### Configuring Kafka
To configure Kafka for Integration Designer, set the following environment variables. This configuration includes naming patterns, consumer group settings, and retry intervals for authentication exceptions.
#### General Kafka configuration
* `SPRING_KAFKA_BOOTSTRAP_SERVERS` - Address of the Kafka server in the format `host:port`
* `KAFKA_TOPIC_NAMING_ENVIRONMENT` - Environment-specific suffix for Kafka topics
* `FLOWX_WORKFLOW_CREATETOPICS` - Controls whether Kafka topics are created automatically (useful for development environments)
  * **When set to `true`**: In development environments, where Kafka topics may need to be created automatically, this setting can be enabled (`flowx.workflow.createTopics: true`). It allows the automatic creation of "in" and "out" topics when workflows are created, eliminating the need to wait for topic creation at runtime.
  * **Default setting (`false`)**: In production or other controlled environments, where automated topic creation is not desired, this setting remains `false` to prevent unintended Kafka topic creation.
#### Kafka consumer settings
* `KAFKA_CONSUMER_GROUP_ID_START_WORKFLOWS` - Consumer group ID for starting workflows
* **Default Value:** `start-workflows-group`
* `KAFKA_CONSUMER_THREADS_START_WORKFLOWS` - Number of Kafka consumer threads for starting workflows
* **Default Value:** `3`
* `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` - Interval (in seconds) between retries after an `AuthorizationException`
* **Default Value:** `10`
#### Kafka topic naming structure
The Kafka topics for Integration Designer use a structured naming convention with dynamic components, allowing for easy integration across environments. This setup defines separators, environment identifiers, and specific naming patterns for both engine and integration-related messages.
#### Topic naming components
| Component | Description | Default Value |
| ------------- | -------------------------------------------------- | ---------------------------------------------------------------- |
| `package` | Package identifier for namespace | `ai.flowx.` |
| `environment` | Environment identifier | `dev.` |
| `version` | Version identifier for topic compatibility | `.v1` |
| `separator` | Primary separator for components | `.` |
| `separator2` | Secondary separator for additional distinction | `-` |
| `prefix` | Combines package and environment as a topic prefix | `${kafka.topic.naming.package}${kafka.topic.naming.environment}` |
| `suffix` | Appends version to the end of the topic name | `${kafka.topic.naming.version}` |
##### Predefined patterns for services
* **Engine Receive Pattern** - `kafka.topic.naming.engineReceivePattern`
* **Pattern:** `engine${dot}receive${dot}`
* **Example Topic Prefix:** `ai.flowx.dev.engine.receive.`
* **Integration Receive Pattern** - `kafka.topic.naming.integrationReceivePattern`
* **Pattern:** `integration${dot}receive${dot}`
* **Example Topic Prefix:** `ai.flowx.dev.integration.receive.`
#### Kafka topics
* **Events Gateway - Outgoing Messages**
* **Topic:** `${kafka.topic.naming.prefix}eventsgateway${dot}receive${dot}workflowinstances${kafka.topic.naming.suffix}`
* **Purpose:** Topic for outgoing workflow instance messages from the events gateway
* **Example Value:** `ai.flowx.dev.eventsgateway.receive.workflowinstances.v1`
* **Engine Pattern**
* **Pattern:** `${kafka.topic.naming.prefix}${kafka.topic.naming.engineReceivePattern}`
* **Purpose:** Topic pattern for receiving messages by the engine service
* **Example Value:** `ai.flowx.dev.engine.receive.*`
* **Integration Pattern**
* **Pattern:** `${kafka.topic.naming.prefix}${kafka.topic.naming.integrationReceivePattern}*`
* **Purpose:** Topic pattern for receiving messages by the integration service
* **Example Value:** `ai.flowx.dev.integration.receive.*`
Replace placeholders with appropriate values for your environment before starting the service.
### Configuring authentication and access roles
Integration Designer uses OAuth2 for secure access control. Set up OAuth2 configurations with these environment variables:
* `SECURITY_OAUTH2_BASE_SERVER_URL` - Base URL for the OAuth 2.0 Authorization Server
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID` - Unique identifier for the client application registered with the OAuth 2.0 server
* `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` - Secret key for authenticating requests made by the authorization client
* `SECURITY_OAUTH2_REALM` - The realm name for OAuth2 authentication
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID` - Client ID for the integration designer service account
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET` - Client Secret for the integration designer service account
Refer to the dedicated sections for configuring user roles and access rights and for configuring a service account.
### Configuring logging
To control the log levels for Integration Designer, set the following environment variables:
* `LOGGING_LEVEL_ROOT` - The log level for root Spring Boot microservice logs
* `LOGGING_LEVEL_APP` - The log level for application-level logs
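For example (illustrative values):

```yaml
LOGGING_LEVEL_ROOT: "INFO" # keep framework noise low
LOGGING_LEVEL_APP: "DEBUG" # verbose application logs while troubleshooting
```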
### Configuring admin ingress
Integration Designer provides an admin ingress route, which can be enabled and customized with additional annotations for SSL certificates or routing preferences.
* **Enabled**: Set to `true` to enable the admin ingress route.
* **Hostname**: Define the hostname for admin access.
```yaml
ingress:
enabled: true
admin:
enabled: true
hostname: "{{ .Values.flowx.ingress.admin }}"
path: /integration(/|$)(.*)
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,platform,Flowx-Platform
```
## Monitoring and maintenance
To monitor the performance and health of Integration Designer, use tools like Prometheus or Grafana. Configure Prometheus metrics with the following environment variable:
* `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED` - Enables or disables Prometheus metrics export (default: false).
### RBAC configuration
Integration Designer requires specific RBAC (Role-Based Access Control) permissions to access Kubernetes ConfigMaps and Secrets, which store necessary configurations and credentials. Set up these permissions by enabling RBAC and defining the required rules.
* `rbac.create`: Set to true to create RBAC resources.
* `rbac.rules`: Define custom RBAC rules as follows:
```yaml
rbac:
create: true
rules:
- apiGroups:
- ""
resources:
- secrets
- configmaps
- pods
verbs:
- get
- list
- watch
```
This configuration grants read access (`get`, `list`, `watch`) to ConfigMaps, Secrets, and Pods, which is essential for retrieving application settings and credentials required by Integration Designer.
# License Engine setup
The License Engine is a service that can be set up using a Docker image. This guide will walk you through the process of setting up the License service and configuring it to meet your needs.
## Infrastructure prerequisites
* **DB instance**
* **Kafka** - version 2.8 or higher
* [**FlowX Designer**](./designer-setup-guide) deployment
## Dependencies
The License Engine has the following dependencies:
* **Postgres database**: Stores license-related data. The database should be set up with the basic configuration properties specified in the Helm `values.yaml` file; these include the database name, the username and password, and resources such as CPU and memory limits.
* **Connection to the same Kafka instance as the engine**: The License Engine needs to communicate with the FLOWX.AI Engine using Kafka; the Kafka instance used by the engine should be the same one used by the License Engine.
* **Routing of requests through NGINX**: Requests made to the License Engine should be routed through the FLOWX.AI Designer using NGINX; the Designer configuration should be updated to also expose the REST API of the License Engine by adding a path in `flowx-admin-plugins-subpaths`.
## Configuration
The service comes with most of the needed configuration properties filled in, but there are a few that need to be set up using some custom environment variables.
### Configuring Postgres database
The basic Postgres configuration is specified in the helm values.yaml file. This file includes properties such as the name of the database, username and password, and resources such as CPU and memory limits.
```yaml
licencedb:
existingSecret: {{secretName}}
metrics:
enabled: true
service:
annotations:
prometheus.io/port: {{prometheus port}}
prometheus.io/scrape: "true"
type: ClusterIP
serviceMonitor:
additionalLabels:
release: prometheus-operator
enabled: true
interval: 30s
scrapeTimeout: 10s
persistence:
enabled: true
size: 1Gi
postgresqlDatabase: license-coredb
postgresqlExtendedConf:
maxConnections: 200
sharedBuffers: 128MB
postgresqlUsername: postgres
resources:
limits:
cpu: 6000m
memory: 2048Mi
requests:
cpu: 200m
memory: 512Mi
```
### OpenID connect settings
* `SECURITY_TYPE`: Indicates that OAuth 2.0 is the chosen security type, default value: `oauth2`.
```yaml
security:
type: oauth2
```
* `SECURITY_PATHAUTHORIZATIONS_0_PATH`: Defines a security path or endpoint pattern. It specifies that the security settings apply to all paths under the "/api/" path. The `**` is a wildcard that means it includes all subpaths under "/api/\*\*".
* `SECURITY_PATHAUTHORIZATIONS_0_ROLESALLOWED`: Specifies the roles allowed for accessing the specified path. In this case, the roles allowed are empty (""). This might imply that access to the "/api/\*\*" paths is open to all users or that no specific roles are required for authorization.
```yaml
pathAuthorizations:
- path: "/api/**"
rolesAllowed: "ANY_AUTHENTICATED_USER"
```
* `SECURITY_OAUTH2_BASE_SERVER_URL`: This setting specifies the base URL of the OpenID server, which is used for authentication and authorization.
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID`: Specifies the client ID associated with the application registered on the OpenID server for authentication and authorization.
* `SECURITY_OAUTH2_REALM`: Defines the realm for the OAuth 2.0 authorization server. The realm is a protected space where the client's resources are stored. It provides additional context for the authentication process.
### Configuring License datasource
The License Engine uses a Postgres/Oracle database to store license-related data. The following environment variables need to be set in order to connect to the database:
* `SPRING_DATASOURCE_JDBCURL`
* `SPRING_DATASOURCE_USERNAME`
* `SPRING_DATASOURCE_PASSWORD`
### Configuring Engine datasource
The License service needs to retrieve the data for a process instance from the engine database. So it needs to have all the correct information to connect to the engine database.
The following configuration details need to be added in configuration files or overwritten using environment variables:
* `ENGINE_DATASOURCE_JDBCURL`
* `ENGINE_DATASOURCE_USERNAME`
* `ENGINE_DATASOURCE_PASSWORD`
### Configuring Kafka
Kafka handles all communication between the License Engine and the FLOWX Engine. Both a producer and a consumer must be configured. The following environment variables need to be set:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS` - address of the Kafka server
* `SPRING_KAFKA_CONSUMER_GROUP_ID` - group of consumers
* `KAFKA_CONSUMER_THREADS` - the number of Kafka consumer threads
* `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` - the interval between retries after `AuthorizationException` is thrown by `KafkaConsumer`
The configured license topic `KAFKA_TOPIC_LICENSE_IN` should be the same as `KAFKA_TOPIC_LICENSE_OUT` from the engine.
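A minimal sketch of this configuration (the broker address and consumer group ID are assumptions; the topic value mirrors the engine's default `KAFKA_TOPIC_LICENSE_OUT`):

```yaml
SPRING_KAFKA_BOOTSTRAP_SERVERS: "kafka:9092"   # assumed broker address
SPRING_KAFKA_CONSUMER_GROUP_ID: "license-core" # assumed consumer group
KAFKA_CONSUMER_THREADS: "3"
KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL: "10"
KAFKA_TOPIC_LICENSE_IN: "ai.flowx.dev.core.trigger.save.license.v1"
```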
### Configuring logging
The following environment variables could be set in order to control log levels:
* `LOGGING_LEVEL_ROOT` - root spring boot microservice logs
* `LOGGING_LEVEL_APP` - app level logs
### Configuring NGINX
The [configuration for the FlowX Designer](./designer-setup-guide#nginx) should be updated to also expose the REST API of the License Engine by adding a path in `flowx-admin-plugins-subpaths`:
```yaml
- path: /license(/|$)(.*)
backend:
serviceName: license-core
servicePort: 80
```
# Deployment configuration for OpenTelemetry
Guide to deploying OpenTelemetry components and configuring associated services.
### Step 1: Install OpenTelemetry Operator
Ensure you have the OpenTelemetry Operator version **0.56.1** or higher:
```yaml
- repoURL: https://open-telemetry.github.io/opentelemetry-helm-charts
chart: opentelemetry-operator
targetRevision: 0.56.1
```
#### Configuration:
```yaml
# Source: https://github.com/open-telemetry/opentelemetry-helm-charts/blob/opentelemetry-operator-0.56.1/charts/opentelemetry-operator/values.yaml
## Provide OpenTelemetry Operator manager container image and resources.
manager:
image:
repository: ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator
tag: ""
collectorImage:
repository: "otel/opentelemetry-collector-contrib"
tag: 0.98.0
opampBridgeImage:
repository: ""
tag: ""
targetAllocatorImage:
repository: ""
tag: ""
autoInstrumentationImage:
java:
repository: ""
tag: ""
nodejs:
repository: ""
tag: ""
python:
repository: ""
tag: ""
dotnet:
repository: ""
tag: ""
# The Go instrumentation support in the operator is disabled by default.
# To enable it, use the operator.autoinstrumentation.go feature gate.
go:
repository: ""
tag: ""
# Feature Gates are a comma-delimited list of feature gate identifiers.
# Prefix a gate with '-' to disable support.
# Prefixing a gate with '+' or no prefix will enable support.
# A full list of valid identifiers can be found here: https://github.com/open-telemetry/opentelemetry-operator/blob/main/pkg/featuregate/featuregate.go
featureGates: ""
ports:
metricsPort: 8080
webhookPort: 9443
healthzPort: 8081
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 64Mi
## Adds additional environment variables
## e.g ENV_VAR: env_value
env:
ENABLE_WEBHOOKS: "true"
## Admission webhooks make sure only requests with correctly formatted rules will get into the Operator.
## They also enable the sidecar injection for OpenTelemetryCollector and Instrumentation CR's
admissionWebhooks:
create: true
servicePort: 443
failurePolicy: Fail
```
Ensure you use the appropriate distribution and version:
```yaml
repository: "otel/opentelemetry-collector-contrib"
tag: 0.98.0
```
### Step 2: Deploy OpenTelemetry Resources
Apply the OpenTelemetry resources:
#### OpenTelemetry Collector
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: otel
namespace: {{ .Release.Namespace }}
spec:
mode: deployment
resources:
limits:
cpu: "2"
memory: 6Gi
requests:
cpu: 200m
memory: 2Gi
config: |
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
# Since this collector needs to receive data from the web, enable cors for all origins
# `allowed_origins` can be refined for your deployment domain
endpoint: 0.0.0.0:4318
cors:
allowed_origins:
- "http://*"
- "https://*"
zipkin:
exporters:
debug: {}
## Create an exporter to Jaeger using the standard `otlp` export format
otlp/tempo:
endpoint: 'otel-tempo:4317'
tls:
insecure: true
# Create an exporter to Prometheus (metrics)
otlphttp/prometheus:
endpoint: 'http://otel-prometheus-server:9090/api/v1/otlp'
tls:
insecure: true
loki:
endpoint: "http://otel-loki:3100/loki/api/v1/push"
default_labels_enabled:
exporter: true
job: true
extensions:
health_check:
memory_ballast: {}
processors:
batch: {}
k8sattributes:
extract:
metadata:
- k8s.namespace.name
- k8s.deployment.name
- k8s.statefulset.name
- k8s.daemonset.name
- k8s.cronjob.name
- k8s.job.name
- k8s.node.name
- k8s.pod.name
- k8s.pod.uid
- k8s.pod.start_time
passthrough: false
pod_association:
- sources:
- from: resource_attribute
name: k8s.pod.ip
- sources:
- from: resource_attribute
name: k8s.pod.uid
- sources:
- from: connection
resource:
attributes:
- key: service.instance.id
from_attribute: k8s.pod.uid
action: insert
memory_limiter:
check_interval: 5s
limit_percentage: 80
spike_limit_percentage: 25
connectors:
spanmetrics: {}
service:
extensions:
- health_check
- memory_ballast
pipelines:
traces:
processors: [memory_limiter, resource, batch]
exporters: [otlp/tempo, debug, spanmetrics]
receivers: [otlp]
metrics:
receivers: [otlp, spanmetrics]
processors: [memory_limiter, resource, batch]
exporters: [otlphttp/prometheus, debug]
logs:
processors: [memory_limiter, resource, batch]
exporters: [loki, debug]
receivers: [otlp]
```
#### OpenTelemetry Instrumentation
```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
name: flowx-otel-instrumentation
namespace: {{ .Release.Namespace }}
spec:
exporter:
endpoint: http://otel-collector:4317
propagators:
- tracecontext
- baggage
sampler:
type: parentbased_traceidratio
argument: "1"
java:
image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:2.1.0
env:
- name: OTEL_INSTRUMENTATION_LOGBACKAPPENDER_ENABLED
value: "true"
- name: OTEL_LOGS_EXPORTER
value: otlp
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://otelcol-operator-collector:4317
- name: OTEL_EXPORTER_OTLP_PROTOCOL
value: grpc
```
### Step 3: Instrument Flowx Services
Update Flowx services to enable Java instrumentation by adding the following pod annotation:
```yaml
podAnnotations:
instrumentation.opentelemetry.io/inject-java: "true"
```
### Step 4: Configure Grafana, Prometheus and Tempo
#### Configure Grafana
```yaml
##Source: https://github.com/grafana/helm-charts/blob/grafana-7.3.10/charts/grafana/values.yaml
rbac:
create: true
namespaced: true
serviceAccount:
create: true
replicas: 1
ingress:
enabled: true
ingressClassName: nginx
path: /
hosts:
- {{ .Values.flowx.ingress.grafana }}
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
enabled: true
accessModes:
- ReadWriteOnce
size: 5Gi
finalizers: []
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Loki
uid: loki
type: loki
access: proxy
url: http://otel-loki:3100
timeout: 300
version: 1
jsonData:
derivedFields:
- datasourceUid: tempo
# matches variations of "traceID" fields, either in JSON or logfmt
matcherRegex: (?:[Tt]race[-_]?[Ii][Dd])[\\]?["]?[=:][ ]?[\\]?["]?(\w+)
name: traceid
# url will be interpreted as query for the datasource
url: "$${__value.raw}"
urlDisplayLabel: See traces
- name: Prometheus
uid: prometheus
type: prometheus
url: 'http://otel-prometheus-server:9090'
editable: true
isDefault: true
jsonData:
exemplarTraceIdDestinations:
- datasourceUid: tempo
name: traceid
- name: Tempo
uid: tempo
type: tempo
access: proxy
url: http://otel-tempo:3100
version: 1
jsonData:
tracesToLogs:
datasourceUid: loki
lokiSearch:
datasourceUid: loki
nodeGraph:
enabled: true
serviceMap:
datasourceUid: prometheus
tracesToLogsV2:
customQuery: false
datasourceUid: loki
filterBySpanID: true
filterByTraceID: true
spanEndTimeShift: 1s
spanStartTimeShift: '-1s'
tags:
- key: service.name
value: job
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: ''
type: file
disableDeletion: false
editable: true
options:
path: /var/lib/grafana/dashboards/default
resources:
limits:
memory: 150Mi
grafana.ini:
auth:
disable_login_form: true
auth.anonymous:
enabled: false
org_name: Main Org.
org_role: Admin
auth.generic_oauth:
enabled: true
name: Keycloak-OAuth
allow_sign_up: true
client_id: private-management-sa
client_secret: xTs4yGYySrHaNDIpCiniHJUGqBKbyCtp
scopes: openid email profile offline_access roles
email_attribute_path: email
login_attribute_path: username
name_attribute_path: full_name
auth_url: https://{{ .Values.flowx.keycloak.host }}/auth/realms/{{ .Values.flowx.keycloak.realm }}/protocol/openid-connect/auth
token_url: https://{{ .Values.flowx.keycloak.host }}/auth/realms/{{ .Values.flowx.keycloak.realm }}/protocol/openid-connect/token
api_url: https://{{ .Values.flowx.keycloak.host }}/auth/realms/{{ .Values.flowx.keycloak.realm }}/protocol/openid-connect/userinfo
# role_attribute_path: contains(roles[*], 'admin') && 'Admin' || contains(roles[*], 'editor') && 'Editor' || 'Viewer'
server:
root_url: "https://{{ .Values.flowx.ingress.grafana }}/grafana"
serve_from_sub_path: true
adminPassword: admin
# assertNoLeakedSecrets is a helper function defined in _helpers.tpl that checks if secret
# values are not exposed in the rendered grafana.ini configmap. It is enabled by default.
#
# To pass values into grafana.ini without exposing them in a configmap, use variable expansion:
# https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#variable-expansion
#
# Alternatively, if you wish to allow secret values to be exposed in the rendered grafana.ini configmap,
# you can disable this check by setting assertNoLeakedSecrets to false.
assertNoLeakedSecrets: false
```
#### Configure Prometheus
```yaml
# https://github.com/prometheus-community/helm-charts/blob/prometheus-22.6.7/charts/prometheus/values.yaml
rbac:
create: true
podSecurityPolicy:
enabled: false
## Define serviceAccount names for components. Defaults to component's fully qualified name.
##
serviceAccounts:
server:
create: true
alertmanager:
enabled: false
alertmanagerFiles:
alertmanager.yml:
global: {}
# slack_api_url: ''
receivers:
- name: default-receiver
# slack_configs:
# - channel: '@you'
# send_resolved: true
route:
group_wait: 10s
group_interval: 5m
receiver: default-receiver
repeat_interval: 3h
configmapReload:
prometheus:
enabled: false
kube-state-metrics:
enabled: false
prometheus-node-exporter:
enabled: false
prometheus-pushgateway:
enabled: false
server:
useExistingClusterRoleName: prometheus-server
## If set it will override prometheus.server.fullname value for ClusterRole and ClusterRoleBinding
##
clusterRoleNameOverride: ""
# Enable only the release namespace for monitoring. By default all namespaces are monitored.
# If releaseNamespace and namespaces are both set a merged list will be monitored.
releaseNamespace: false
## namespaces to monitor (instead of monitoring all - clusterwide). Needed if you want to run without Cluster-admin privileges.
# namespaces:
# - namespace
extraFlags:
- "web.enable-lifecycle"
- "enable-feature=exemplar-storage"
- "enable-feature=otlp-write-receiver"
global:
scrape_interval: 5s
scrape_timeout: 3s
evaluation_interval: 30s
persistentVolume:
enabled: true
mountPath: /data
## Prometheus server data Persistent Volume size
##
size: 10Gi
service:
servicePort: 9090
## Prometheus server resource requests and limits
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
limits:
cpu: 2000m
memory: 4096Mi
requests:
cpu: 500m
memory: 2048Mi
## Prometheus data retention period (default if not specified is 15 days)
##
retention: "3d"
## Prometheus' data retention size. Supported units: B, KB, MB, GB, TB, PB, EB.
##
retentionSize: ""
serverFiles:
prometheus.yml:
scrape_configs:
- job_name: prometheus
static_configs:
- targets:
- localhost:9090
- job_name: 'otel-collector'
honor_labels: true
kubernetes_sd_configs:
- role: pod
namespaces:
own_namespace: true
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_opentelemetry_community_demo]
action: keep
regex: true
```
#### Configure Loki with a Minio backend
```yaml
# https://raw.githubusercontent.com/grafana/loki/helm-loki-6.2.0/production/helm/loki/single-binary-values.yaml
loki:
auth_enabled: false # https://grafana.com/docs/loki/latest/operations/multi-tenancy/#multi-tenancy
commonConfig:
replication_factor: 1
schemaConfig:
configs:
- from: 2024-04-01
store: tsdb
object_store: s3
schema: v13
index:
prefix: loki_index_
period: 24h
ingester:
chunk_encoding: snappy
tracing:
enabled: true
querier:
# Default is 4, if you have enough memory and CPU you can increase, reduce if OOMing
max_concurrent: 2
deploymentMode: SingleBinary
singleBinary:
replicas: 1
resources:
limits:
cpu: 3
memory: 4Gi
requests:
cpu: 1
memory: 2Gi
extraEnv:
# Keep a little bit lower than memory limits
- name: GOMEMLIMIT
value: 3750MiB
chunksCache:
# default is 500MB, with limited memory keep this smaller
writebackSizeLimit: 10MB
# Enable minio for storage
minio:
enabled: true
rootUser: enterprise-logs
rootPassword: supersecret
buckets:
- name: chunks
policy: none
purge: false
- name: ruler
policy: none
purge: false
- name: admin
policy: none
purge: false
persistence:
size: 10Gi
lokiCanary:
enabled: false
# Zero out replica counts of other deployment modes
backend:
replicas: 0
read:
replicas: 0
write:
replicas: 0
ingester:
replicas: 0
querier:
replicas: 0
queryFrontend:
replicas: 0
queryScheduler:
replicas: 0
distributor:
replicas: 0
compactor:
replicas: 0
indexGateway:
replicas: 0
bloomCompactor:
replicas: 0
bloomGateway:
replicas: 0
```
#### Configure Tempo
```yaml
# https://github.com/grafana/helm-charts/blob/tempo-1.7.2/charts/tempo/values.yaml
tempo:
# configure a 3 days retention by default
retention: 72h
# enable opentelemetry protocol & jaeger receivers
# this configuration will listen on all ports and protocols that tempo is capable of.
# the receivers all come from the OpenTelemetry Collector. More configuration information can
# be found here: https://github.com/open-telemetry/opentelemetry-collector/tree/master/receiver
receivers:
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250
thrift_binary:
endpoint: 0.0.0.0:6832
thrift_compact:
endpoint: 0.0.0.0:6831
thrift_http:
endpoint: 0.0.0.0:14268
otlp:
protocols:
grpc:
endpoint: "0.0.0.0:4317"
http:
endpoint: "0.0.0.0:4318"
persistence:
enabled: true
size: 10Gi
# -- Pod Annotations
podAnnotations:
prometheus.io/port: prom-metrics
prometheus.io/scrape: "true"
```
# Open Telemetry default properties
* `otel.resource.attributes=service.name=flowx-process-engine,service.version=1.1.1`: Environment variable that sets the resource attributes.
This value is overridden by the Kubernetes operator at deployment time; it is mainly useful for local development.
### Java agent configuration
* `otel.javaagent.enabled=true`
* `otel.javaagent.logging=simple`
* `otel.javaagent.debug=false`
### Disable OTEL SDK
* `otel.sdk.disabled=false`
## Exporters configuration (common config for all exporters)
* `otel.traces.exporter=otlp`
* `otel.metrics.exporter=otlp`
* `otel.logs.exporter=otlp`
### OTLP exporter
* `otel.exporter.otlp.endpoint=http://localhost:4317`
The endpoint is overridden by the Kubernetes operator at deployment time; it is mainly useful for local development.
* `otel.exporter.otlp.protocol=grpc`
* `otel.exporter.otlp.timeout=10000`
* `otel.exporter.otlp.compression=gzip`
* `otel.exporter.otlp.metrics.temporality.preference=cumulative`
* `otel.exporter.otlp.metrics.default.histogram.aggregation=explicit_bucket_histogram`
### Tracer provider
`SdkTracerProvider` specific configuration options.
* Sampler: [**here**](https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk-extensions/autoconfigure/README.md#sampler)
* The default sampler is: `parentbased_always_on`
* To disable FlowX technical spans, add sampler: `fxTechnicalSpanFilterSampler`
* `otel.traces.sampler=parentbased_always_on`
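When running under the Kubernetes operator, these properties map to their standard OpenTelemetry environment-variable equivalents. For example, switching to the FlowX technical-span filter could look like this (illustrative only):
```yaml
env:
  - name: OTEL_TRACES_SAMPLER
    value: fxTechnicalSpanFilterSampler   # drops FlowX technical spans from traces
```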
### Batch span processor
* `otel.bsp.schedule.delay=5000`
* `otel.bsp.max.queue.size=2048`
* `otel.bsp.max.export.batch.size=512`
* `otel.bsp.export.timeout=30000`
### Meter provider
The following configuration options are specific to `SdkMeterProvider`.
* `otel.metric.export.interval=60000`
* `otel.metric.export.timeout=10000`
### Logger provider
The following configuration options are specific to `SdkLoggerProvider`.
* `otel.blrp.schedule.delay=1000`
* `otel.blrp.max.queue.size=2048`
* `otel.blrp.max.export.batch.size=512`
* `otel.blrp.export.timeout=30000`
## Agent auto-instrumentation
* `otel.instrumentation.messaging.experimental.receive-telemetry.enabled=false`
* `otel.instrumentation.common.experimental.controller-telemetry.enabled=true`
* `otel.instrumentation.common.experimental.view-telemetry.enabled=true`
* `otel.instrumentation.common.default-enabled=false`
Disables all auto-instrumentation so that only the explicitly enabled instrumentations below are active. Comment this property out entirely to restore the default behavior.
### Disable annotated methods
* `otel.instrumentation.opentelemetry-instrumentation-annotations.exclude-methods=my.package.MyClass1[method1,method2];my.package.MyClass2[method3]`
### Instrumentation config per library
Some instrumentation relies on other instrumentation to function properly. When selectively enabling instrumentation, be sure to enable the transitive dependencies too.
* `otel.instrumentation.opentelemetry-api.enabled=true`
* `otel.instrumentation.opentelemetry-instrumentation-annotations.enabled=true`
* `otel.instrumentation.opentelemetry-extension-annotations.enabled=false`
* `otel.instrumentation.methods.enabled=true`
* `otel.instrumentation.external-annotations.enabled=true`
* `otel.instrumentation.kafka.enabled=true`
* `otel.instrumentation.tomcat.enabled=true`
* `otel.instrumentation.elasticsearch-transport.enabled=true`
* `otel.instrumentation.elasticsearch-rest.enabled=true`
* `otel.instrumentation.grpc.enabled=true`
* `otel.instrumentation.hibernate.enabled=false`
Hibernate and JDBC instrumentation produce largely duplicate query traces, so only one of the two should be enabled.
* `otel.instrumentation.hikaricp.enabled=false`
* `otel.instrumentation.java-http-client.enabled=true`
* `otel.instrumentation.http-url-connection.enabled=true`
* `otel.instrumentation.jdbc.enabled=false`
* `otel.instrumentation.jdbc-datasource.enabled=false`
* `otel.instrumentation.runtime-telemetry.enabled=true`
* `otel.instrumentation.servlet.enabled=true`
* `otel.instrumentation.executors.enabled=true`
* `otel.instrumentation.java-util-logging.enabled=true`
* `otel.instrumentation.log4j-appender.enabled=true`
* `otel.instrumentation.log4j-mdc.enabled=true`
* `otel.instrumentation.log4j-context-data.enabled=true`
* `otel.instrumentation.logback-appender.enabled=true`
* `otel.instrumentation.logback-mdc.enabled=true`
* `otel.instrumentation.mongo.enabled=true`
* `otel.instrumentation.rxjava.enabled=false`
* `otel.instrumentation.reactor.enabled=false`
### Redis client imported by spring-redis-data
* `otel.instrumentation.lettuce.enabled=true`
## Spring instrumentation props
* `otel.instrumentation.spring-batch.enabled=false`
* `otel.instrumentation.spring-core.enabled=true`
* `otel.instrumentation.spring-data.enabled=true`
* `otel.instrumentation.spring-jms.enabled=false`
* `otel.instrumentation.spring-integration.enabled=false`
* `otel.instrumentation.spring-kafka.enabled=true`
* `otel.instrumentation.spring-rabbit.enabled=false`
* `otel.instrumentation.spring-rmi.enabled=false`
* `otel.instrumentation.spring-scheduling.enabled=false`
* `otel.instrumentation.spring-web.enabled=true`
* `otel.instrumentation.spring-webflux.enabled=false`
* `otel.instrumentation.spring-webmvc.enabled=true`
* `otel.instrumentation.spring-ws.enabled=false`
# Documents plugin setup
The Documents plugin provides functionality for generating, persisting, combining, and manipulating documents within the FlowX.AI system.
The plugin is available as a docker image.
## Dependencies
Before setting up the plugin, ensure that you have the following dependencies installed and configured:
* [PostgreSQL](https://www.postgresql.org/) Database: You will need a PostgreSQL database to store data related to document templates and documents.
* [MongoDB](https://www.mongodb.com/2) Database: MongoDB is required for the HTML templates feature of the plugin.
* Kafka: Establish a connection to the Kafka instance used by the FLOWX.AI engine.
* [Redis](https://redis.io/): Set up a Redis instance for caching purposes.
* S3-Compatible File Storage Solution: Deploy an S3-compatible file storage solution, such as [Min.io](https://min.io/), to store document files.
## Configuration
The plugin comes with pre-filled configuration properties, but you need to set up a few custom environment variables to tailor it to your specific setup. Here are the key configuration steps:
### Postgres database
Configure the basic Postgres settings in the `values.yaml` file:
```yaml
documentdb:
existingSecret: {{secretName}}
metrics:
enabled: true
service:
annotations:
prometheus.io/port: {{prometheus port}}
prometheus.io/scrape: "true"
type: ClusterIP
serviceMonitor:
additionalLabels:
release: prometheus-operator
enabled: true
interval: 30s
scrapeTimeout: 10s
persistence:
enabled: true
size: 4Gi
postgresqlDatabase: document
postgresqlUsername: postgres
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 200m
memory: 256Mi
service:
annotations:
fabric8.io/expose: "false"
```
### Redis server
The plugin can utilize the [Redis component](https://app.gitbook.com/@flowx-ai/s/flowx-docs/flowx-engine/setup-guide#2-redis-server) already deployed for the FLOWX.AI engine. Make sure it is configured properly.
### Document storage
Ensure that you have a deployed S3-compatible file storage solution, such as Min.io, which will be used to store document files.
### Authorization configuration
To connect to the identity management platform, set the following environment variables:
* `SECURITY_OAUTH2_BASE_SERVER_URL`
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID`
* `SECURITY_OAUTH2_REALM`
### Enable HTML template types
If you want to use HTML templates for documents, set the `FLOWX_HTML_TEMPLATES_ENABLED` environment variable to true.
### Datasource configuration
The service uses a Postgres/Oracle database to store data related to document templates and documents. Configure the following details using environment variables:
* `SPRING_DATASOURCE_URL`: The URL for the Postgres/Oracle database.
* `SPRING_DATASOURCE_USERNAME`: The username for the database connection.
* `SPRING_DATASOURCE_PASSWORD`: The password for the database connection.
* `SPRING_JPA_PROPERTIES_HIBERNATE_DEFAULT_SCHEMA`: Use this property to overwrite the name of the database schema if needed.
Ensure that the user, password, connection URL, and database name are correctly configured to avoid startup errors. The datasource is automatically configured using a Liquibase script within the engine, including migration scripts.
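A minimal sketch of these variables in a Kubernetes deployment, with hypothetical values:
```yaml
env:
  - name: SPRING_DATASOURCE_URL
    value: jdbc:postgresql://postgresql:5432/document   # hypothetical host and database
  - name: SPRING_DATASOURCE_USERNAME
    value: postgres
  - name: SPRING_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: document-db-secret                        # hypothetical secret
        key: password
```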
### MongoDB configuration
Configure the MongoDB database access information by setting the `SPRING_DATA_MONGODB_URI` environment variable to the MongoDB database URI.
### Redis configuration
Set the following values with the corresponding Redis-related values:
* `SPRING_REDIS_HOST`: The host address of the Redis server.
* `SPRING_REDIS_PASSWORD`: The password for the Redis server, if applicable.
* `REDIS_TTL`: The time-to-live (TTL) value for Redis cache entries.
### Conversion
This configuration is available starting with platform version **3.4.7**.
* `FLOWX_CONVERT_DPI`: Sets the DPI (dots per inch) for PDF to JPEG conversion. Higher values result in higher resolution images. (Default value: `150`).
### Kafka configuration
Set the following Kafka-related configurations using environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS`: The address of the Kafka server.
* `SPRING_KAFKA_CONSUMER_GROUP_ID`: The group ID for Kafka consumers.
* `KAFKA_CONSUMER_THREADS`: The number of Kafka consumer threads to use.
* `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL`: The interval between retries after an `AuthorizationException` is thrown by `KafkaConsumer`.
* `KAFKA_MESSAGE_MAX_BYTES`: The maximum size of a message that can be received by the broker from a producer.
Each action in the service corresponds to a Kafka event. Configure a separate Kafka topic for each use case.
#### Generate
* `KAFKA_TOPIC_DOCUMENT_GENERATE_HTML_IN`: This Kafka topic is used for messages related to generating HTML documents (the topic that listens for the request from the engine)
* `KAFKA_TOPIC_DOCUMENT_GENERATE_HTML_OUT`: This Kafka topic is used for messages related to generating HTML documents (the topic on which the engine will expect the reply)
* `KAFKA_TOPIC_DOCUMENT_GENERATE_PDF_IN`: This Kafka topic is used for the input messages related to generating PDF documents (the topic that listens for the request from the engine)
* `KAFKA_TOPIC_DOCUMENT_GENERATE_PDF_OUT`: This Kafka topic is used for the output messages related to generating PDF documents, it produces messages with the result of generating a PDF document (the topic on which the engine will expect the reply)
#### Persist (uploading a file/document)
* `KAFKA_TOPIC_FILE_PERSIST_IN`: This Kafka topic is used for the input messages related to persisting files, it receives messages indicating the request to persist a file (the topic that listens for the request from the engine)
* `KAFKA_TOPIC_FILE_PERSIST_OUT`: This Kafka topic is used for the output messages related to persisting files, it produces messages with the result of persisting a file (the topic on which the engine will expect the reply)
* `KAFKA_TOPIC_DOCUMENT_PERSIST_IN`: This Kafka topic is used for the input messages related to persisting documents, it receives messages indicating the request to persist a document (the topic that listens for the request from the engine)
* `KAFKA_TOPIC_DOCUMENT_PERSIST_OUT`: This Kafka topic is used for the output messages related to persisting documents, it produces messages with the result of persisting a document (the topic on which the engine will expect the reply)
#### Split
* `KAFKA_TOPIC_DOCUMENT_SPLIT_IN`: This Kafka topic is used for the input messages related to splitting documents, it receives messages indicating the request to split a document into multiple parts (the topic that listens for the request from the engine)
* `KAFKA_TOPIC_DOCUMENT_SPLIT_OUT`: This Kafka topic is used for the output messages related to splitting documents, it produces messages with the result of splitting a document (the topic on which the engine will expect the reply)
#### Combine
* `KAFKA_TOPIC_FILE_COMBINE_IN`: This Kafka topic is used for the input messages related to combining files, it receives messages indicating the request to combine multiple files into a single file (the topic that listens for the request from the engine)
* `KAFKA_TOPIC_FILE_COMBINE_OUT`: This Kafka topic is used for the output messages related to combining files, it produces messages with the result of combining files (the topic on which the engine will expect the reply)
#### Get
* `KAFKA_TOPIC_DOCUMENT_GET_URLS_IN`: This Kafka topic is used for the input messages related to retrieving URLs for documents, it receives messages indicating the request to retrieve the URLs of documents (the topic that listens for the request from the engine)
* `KAFKA_TOPIC_DOCUMENT_GET_URLS_OUT`: This Kafka topic is used for the output messages related to retrieving URLs for documents, it produces messages with the result of retrieving the URLs of documents (the topic on which the engine will expect the reply)
#### Delete
* `KAFKA_TOPIC_FILE_DELETE_IN`: This Kafka topic is used for the input messages related to deleting files, it receives messages indicating the request to delete a file (the topic that listens for the request from the engine)
* `KAFKA_TOPIC_FILE_DELETE_OUT`: This Kafka topic is used for the output messages related to deleting files, it produces messages with the result of deleting a file (the topic on which the engine will expect the reply)
#### OCR
* `KAFKA_TOPIC_OCR_OUT`: This Kafka topic is used for the output messages related to optical character recognition (OCR), it produces messages with the OCR results (the topic on which the engine will expect the reply)
* `KAFKA_TOPIC_OCR_IN`: This Kafka topic is used for the input messages related to optical character recognition (OCR), it receives messages indicating the request to perform OCR on a document (the topic that listens for the request from the engine)
Ensure that the Engine is listening to messages on topics with specific patterns. Use the correct outgoing topic names when configuring the documents plugin.
Each of these Kafka topics corresponds to a specific action or functionality within the service, allowing communication and data exchange between different components or services in a decoupled manner.
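As an illustration, pairing one of the generate-PDF topics with the engine might look like this (the topic names are hypothetical; use whatever naming pattern your engine is configured to listen on):
```yaml
env:
  - name: KAFKA_TOPIC_DOCUMENT_GENERATE_PDF_IN
    value: ai.flowx.in.document.generate.pdf       # hypothetical request topic
  - name: KAFKA_TOPIC_DOCUMENT_GENERATE_PDF_OUT
    value: ai.flowx.updates.document.generate.pdf  # hypothetical reply topic; must match the engine's expected pattern
```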
### File storage configuration
Depending on your use case, you can choose either a file system or an S3-compatible cloud storage solution for document storage. Configure the file storage solution using the following environment variables:
* `APPLICATION_FILE_STORAGE_PARTITION_STRATEGY`: Set the partition strategy for file storage. Use `NONE` to save documents in `minio/amazon-s3` as before, with a bucket for each process instance. Use `PROCESS_DATE` to save documents in a single bucket with a subfolder structure, for example: `bucket/2022/2022-07-04/process-id-xxxx/customer-id/file.pdf`.
* `APPLICATION_FILE_STORAGE_DELETION_STRATEGY` (default value: **delete**): Keeps the default behavior of deleting temporary files.
Other possible values:
* **disabled**: Disables the deletion of temporary files from the temporary bucket entirely; the responsibility for deleting and cleaning up the bucket moves to the admins of the implementing project.
* **deleteBypassingGovernanceRetention**: Still deletes the temporary files, and additionally adds the header `x-amz-bypass-governance-retention:true` to the delete request, enabling the deletion of governed files when the S3 user configured for the document plugin has the `s3:BypassGovernanceRetention` permission.
* `APPLICATION_FILE_STORAGE_S3_SERVER_URL`: The URL of the S3-compatible server.
* `APPLICATION_FILE_STORAGE_S3_ACCESS_KEY`: The access key for the S3-compatible server.
* `APPLICATION_FILE_STORAGE_S3_SECRET_KEY`: The secret key for the S3-compatible server.
* `APPLICATION_FILE_STORAGE_S3_BUCKET_PREFIX`: The prefix to use for S3 bucket names.
* `APPLICATION_FILE_STORAGE_S3_TEMP_BUCKET`: The temporary bucket where uploaded files first land before being transferred to their designated bucket.
Make sure to follow the recommended [bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html) when choosing the bucket prefix name.
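A minimal sketch of an S3/Min.io configuration using the `PROCESS_DATE` partition strategy; all values are hypothetical:
```yaml
env:
  - name: APPLICATION_FILE_STORAGE_PARTITION_STRATEGY
    value: PROCESS_DATE
  - name: APPLICATION_FILE_STORAGE_S3_SERVER_URL
    value: http://minio:9000              # hypothetical Min.io endpoint
  - name: APPLICATION_FILE_STORAGE_S3_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: document-storage-secret     # hypothetical secret
        key: access-key
  - name: APPLICATION_FILE_STORAGE_S3_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: document-storage-secret
        key: secret-key
  - name: APPLICATION_FILE_STORAGE_S3_BUCKET_PREFIX
    value: flowx-documents                # hypothetical prefix; follow the S3 bucket naming rules
  - name: APPLICATION_FILE_STORAGE_S3_TEMP_BUCKET
    value: flowx-documents-temp
```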
### Setting maximum file size
To control the maximum file size permitted for uploads, configure the `SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE` and `SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE` variables.
The limit is set by default to 50MB:
```yml
spring:
servlet:
contextPath: /
multipart:
max-file-size: ${MULTIPART_MAX_FILE_SIZE:50MB} #increase the multipart file size on the request
max-request-size: ${MULTIPART_MAX_FILE_SIZE:50MB} #increase the request size
```
### Custom font paths for PDF templates
If you want to use specific fonts when generating documents based on PDF templates, override the `FLOWX_HTML_TEMPLATES_PDF_FONT_PATHS` config. By default, Calibri and DejaVuSans are the available fonts.
After making these configurations, the fonts will be available for use within PDF templates.
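For instance, pointing the service at an extra font shipped with the container might look like this (the font path is hypothetical):
```yaml
env:
  - name: FLOWX_HTML_TEMPLATES_PDF_FONT_PATHS
    value: /fonts/OpenSans-Regular.ttf    # hypothetical path to a custom font inside the container
```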
### Logging
The following environment variables can be set to control log levels:
* `LOGGING_LEVEL_ROOT`: Controls the log level for root Spring Boot microservice logs.
* `LOGGING_LEVEL_APP`: Controls the log level for application-specific logs.
* `LOGGING_LEVEL_MONGO_DRIVER`: Controls the log level for MongoDB driver logs.
Adjust these variables according to your logging requirements.
# Notifications plugin setup
The Notifications plugin is available as a docker image and has the following dependencies:
* a [MongoDB](https://www.mongodb.com/2) database
* needs to be able to connect to the Kafka instance used by the engine
* a [Redis](https://redis.io/) instance for caching notification templates
* in case you need to attach documents to the sent notifications, the plugin needs access to your chosen storage solution. It can use an S3-compatible file storage solution (we have successfully used [Min.io](https://min.io/))
The plugin comes with most of the needed configuration properties filled in, but there are a few that need to be set up using some custom environment variables.
## Dependencies
### Mongo database
Basic MongoDB configuration - helm values.yaml
```yaml
notification-mdb:
existingSecret: {{secretName}}
mongodbDatabase: {{NotificationDatabaseName}}
mongodbUsername: {{NotificationDatabaseUser}}
persistence:
enabled: true
mountPath: /bitnami/mongodb
size: 4Gi
replicaSet:
enabled: true
name: rs0
pdb:
enabled: true
minAvailable:
arbiter: 1
secondary: 1
replicas:
arbiter: 1
secondary: 1
useHostnames: true
serviceAccount:
create: false
usePassword: true
```
### Redis server
The plugin can use the [Redis component](https://app.gitbook.com/@flowx-ai/s/flowx-docs/flowx-engine/setup-guide#2-redis-server) already deployed for the engine.
### Document storage
You need to have an S3 compatible file storage solution deployed in your setup.
## Configuration
### Authorization configuration
The following variables need to be set in order to connect to the identity management platform:
`SECURITY_OAUTH2_BASE_SERVER_URL`
`SECURITY_OAUTH2_CLIENT_CLIENT_ID`
`SECURITY_OAUTH2_CLIENT_CLIENT_SECRET`
`SECURITY_OAUTH2_REALM`
### MongoDB configuration
The only thing that needs to be configured is the database access information; the rest is handled by the plugin.
`SPRING_DATA_MONGODB_URI` - the URI for the MongoDB database
### Redis configuration
The following values should be set with the corresponding Redis related values.
`SPRING_REDIS_HOST`
`SPRING_REDIS_PASSWORD`
`REDIS_TTL`
### Kafka configuration
The following Kafka related configurations can be set by using environment variables:
`SPRING_KAFKA_BOOTSTRAP_SERVERS` - address of the Kafka server
`SPRING_KAFKA_CONSUMER_GROUP_ID` - group of consumers
`KAFKA_CONSUMER_THREADS` - the number of Kafka consumer threads
`KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` - the interval between retries after `AuthorizationException` is thrown by `KafkaConsumer`
`KAFKA_MESSAGE_MAX_BYTES` - the maximum size of a message that can be received by the broker from a producer
Each action available in the service corresponds to a Kafka event. A separate Kafka topic must be configured for each use-case.
#### **Forwarding notifications to an external system**
`KAFKA_TOPIC_NOTIFICATION_EXTERNAL_OUT` - the notification will be forwarded on this topic to be handled by an external system
#### **Sending a notification**
`KAFKA_TOPIC_NOTIFICATION_INTERNAL_IN` - topic used to trigger the request to send a notification
`KAFKA_TOPIC_NOTIFICATION_INTERNAL_OUT` - topic used for sending replies after sending the notification
#### **Generating/validating an OTP**
`KAFKA_TOPIC_OTP_GENERATE_IN`
`KAFKA_TOPIC_OTP_GENERATE_OUT` - after the OTP is generated and sent to the user, this topic is used to send the response back to the Engine
`KAFKA_TOPIC_OTP_VALIDATE_IN` - an event sent on this topic with an OTP and an identifier will trigger a check of whether the OTP is valid
`KAFKA_TOPIC_OTP_VALIDATE_OUT` - the response to an OTP validation request is sent back to the Engine on this topic
The Engine listens for messages on topics with names matching a specific pattern, so make sure to use the correct outgoing topic names when configuring the notifications plugin.
### File storage configuration
Depending on your use case, you can use either a file system or an S3-compatible cloud storage solution (for example, [min.io](http://min.io/)).
The file storage solution can be configured using the following environment variables:
`APPLICATION_FILE_STORAGE_S3_SERVER_URL`
`APPLICATION_FILE_STORAGE_S3_ACCESS_KEY`
`APPLICATION_FILE_STORAGE_S3_SECRET_KEY`
`APPLICATION_FILE_STORAGE_S3_BUCKET_PREFIX`
### SMTP Setup
If you want to use a custom SMTP server:
`SIMPLEJAVAMAIL_SMTP_HOST` - the hostname or IP address of the SMTP server used to send outgoing mail
`SIMPLEJAVAMAIL_SMTP_PORT` - the port used to connect to the SMTP server
`SIMPLEJAVAMAIL_SMTP_USERNAME`
`SIMPLEJAVAMAIL_SMTP_PASSWORD`
`SIMPLEJAVAEMAIL_TRANSPORTSTRATEGY` - sets how notifications are delivered, for example `EXTERNAL_FORWARD` for forwarding to external adapters
The email and name to be used as sender for emails sent by the plugin:
`APPLICATION_MAIL_FROM_EMAIL`
`APPLICATION_MAIL_FROM_NAME`
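A minimal SMTP sketch, with hypothetical server details and a hypothetical `notification-smtp-secret` for credentials:
```yaml
env:
  - name: SIMPLEJAVAMAIL_SMTP_HOST
    value: smtp.example.com               # hypothetical SMTP host
  - name: SIMPLEJAVAMAIL_SMTP_PORT
    value: "587"
  - name: SIMPLEJAVAMAIL_SMTP_USERNAME
    valueFrom:
      secretKeyRef:
        name: notification-smtp-secret    # hypothetical secret
        key: username
  - name: SIMPLEJAVAMAIL_SMTP_PASSWORD
    valueFrom:
      secretKeyRef:
        name: notification-smtp-secret
        key: password
  - name: APPLICATION_MAIL_FROM_EMAIL
    value: no-reply@example.com           # hypothetical sender address
  - name: APPLICATION_MAIL_FROM_NAME
    value: "FlowX Notifications"
```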
### Email attachments configuration
The maximum file size for files to be attached as email attachments can also be configured:
`SPRING_HTTP_MULTIPART_MAX_FILE_SIZE`
`SPRING_HTTP_MULTIPART_MAX_REQUEST_SIZE`
### OTP Configuration
The desired length and expiration time of the generated one-time passwords can also be configured:
`FLOWX_OTP_LENGTH` - the number of characters in the generated OTP
`FLOWX_OTP_EXPIRE_TIME_IN_SECONDS` - expiry time (seconds)
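For example, a 6-character OTP valid for 5 minutes (illustrative values):
```yaml
env:
  - name: FLOWX_OTP_LENGTH
    value: "6"
  - name: FLOWX_OTP_EXPIRE_TIME_IN_SECONDS
    value: "300"
```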
### Logging
The following environment variables can be set to control log levels:
`LOGGING_LEVEL_ROOT` - root spring boot microservice logs
`LOGGING_LEVEL_APP` - app level logs
`LOGGING_LEVEL_MONGO_DRIVER` - MongoDB driver logs
`LOGGING_LEVEL_THYMELEAF` - [Thymeleaf](https://www.thymeleaf.org/) logs, logs related to using notifications templates
### Error handling
`KAFKA_CONSUMER_ERROR_HANDLING_ENABLED` - default value: `FALSE` → controls whether Kafka consumer applications handle errors and failures during message consumption; when set to `true`, the consumer application handles any errors that occur while consuming messages
`KAFKA_CONSUMER_ERROR_HANDLING_RETRIES` - default value: `0` → when `KAFKA_CONSUMER_ERROR_HANDLING_ENABLED` is set to `true`, specifies the maximum number of retries the consumer application should attempt before giving up on processing a message; for example, if set to `5`, the consumer application will attempt to process a message up to 5 times before giving up
`KAFKA_CONSUMER_ERROR_HANDLING_RETRY_INTERVAL` - default value: `1000` → when error handling and retries are enabled, specifies the amount of time the consumer application waits before retrying a failed message
For example, if `KAFKA_CONSUMER_ERROR_HANDLING_RETRY_INTERVAL` is set to 5 seconds, the consumer application will wait 5 seconds before retrying a failed message. This interval applies to all retry attempts, so if `KAFKA_CONSUMER_ERROR_HANDLING_RETRIES` is set to 5 and the retry interval is 5 seconds, the consumer application will make up to 5 attempts, waiting 5 seconds between each attempt.
# OCR plugin setup
The OCR plugin is available as a docker image and requires the following infrastructure prerequisites.
## Infrastructure prerequisites
* S3 bucket or alternative (for example, minio)
* Kafka cluster
Starting with `ocr-plugin 1.X`, RabbitMQ is no longer required.
The following environment variable from previous releases must be removed in order to use the OCR plugin: `CELERY_BROKER_URL`.
## Deployment/Configuration
To deploy the OCR plugin, deploy the `ocr-plugin` helm chart with a custom values file.
The most important sections are shown below; more options can be found in the helm chart.
```yaml
image:
repository: /ocr-plugin
applicationSecrets: {}
replicaCount: 2
resources: {}
env: []
```
### Credentials
S3 bucket:
```yaml
applicationSecrets:
enable: true
envSecretKeyRef:
STORAGE_S3_ACCESS_KEY: access-key # default empty
STORAGE_S3_SECRET_KEY: secret-key # default empty
existingSecret: true
secretName: ocr-plugin-application-config
```
### Kafka configuration
You can override the following environment variables:
| Environment Variable | Definition | Default Value | Example |
| ------------------------------------- | ------------------------------------------------------------------------------------------------------------- | ------------- | -------------------- |
| `ENABLE_KAFKA_SASL` | Indicates whether Kafka SASL authentication is enabled | `False` | - |
| `KAFKA_ADDRESS` | The address of the Kafka bootstrap server in the format `host:port` | - | `kafka-server1:9092` |
| `KAFKA_CONSUME_SCHEDULE` | The interval (in seconds) at which Kafka messages are consumed | `30` | - |
| `KAFKA_INPUT_TOPIC` | The Kafka topic from which input messages are consumed | - | - |
| `KAFKA_OCR_CONSUMER_GROUPID` | The consumer group ID for the OCR Kafka consumer | `ocr_group` | - |
| `KAFKA_CONSUMER_AUTO_COMMIT` | Determines whether Kafka consumer commits offsets automatically | `True` | - |
| `KAFKA_CONSUMER_AUTO_COMMIT_INTERVAL` | The interval (in milliseconds) at which Kafka consumer commits offsets automatically | `1000` | - |
| `KAFKA_CONSUMER_TIMEOUT` | The timeout (in milliseconds) for Kafka consumer operations | `28000` | - |
| `KAFKA_CONSUMER_MAX_POLL_INTERVAL` | The maximum interval (in milliseconds) between consecutive polls for Kafka consume | `25000` | - |
| `KAFKA_CONSUMER_AUTO_OFFSET_RESET` | The strategy for resetting the offset when no initial offset is available or if the current offset is invalid | `earliest` | - |
| `KAFKA_OUTPUT_TOPIC` | The Kafka topic to which output messages are sent | - | - |
Please note that the default values and examples provided here are for illustrative purposes. Make sure to replace them with the appropriate values based on your Kafka configuration.
When configuring the OCR plugin, make sure to use the correct outgoing topic names that match [**the pattern expected by the Engine**](../flowx-engine-setup-guide/engine-setup#configuring-kafka), which listens for messages on topics with specific names.
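A minimal sketch of how these variables could be set through the chart's `env` list, with hypothetical broker and topic names:
```yaml
env:
  - name: KAFKA_ADDRESS
    value: kafka-server1:9092     # hypothetical broker address
  - name: KAFKA_INPUT_TOPIC
    value: ai.flowx.ocr.in        # hypothetical request topic
  - name: KAFKA_OUTPUT_TOPIC
    value: ai.flowx.ocr.out       # hypothetical reply topic; must match the engine's expected pattern
  - name: KAFKA_OCR_CONSUMER_GROUPID
    value: ocr_group
```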
### Authorization
You can override the following environment variables:
| Environment Variable | Definition | Default Value | Example |
| -------------------------- | ------------------------------------------------------ | ------------- | --------------------------------- |
| `OAUTH_CLIENT_ID` | The client ID for OAuth authentication | - | `your_client_id` |
| `OAUTH_CLIENT_SECRET` | The client secret for OAuth authentication | - | `your_client_secret` |
| `OAUTH_TOKEN_ENDPOINT_URI` | The URI of the token endpoint for OAuth authentication | - | `https://oauth.example.com/token` |
Please note that the default values and examples provided here are for illustrative purposes. Make sure to replace them with the appropriate values based on your OAuth authentication configuration.
### Storage (S3 configuration)
You can override the following environment variables:
| Environment Variable | Definition | Default Value | Example |
| ----------------------------------- | ------------------------------------------------------------------- | ------------- | --------------------------------------------------- |
| `STORAGE_S3_HOST` | The host address of the S3 storage service | - | `minio:9000`, `https://s3.eu-west-1.amazonaws.com/` |
| `STORAGE_S3_SECURE_CONNECTION` | Indicates whether to use a secure connection (HTTPS) for S3 storage | `False` | |
| `STORAGE_S3_LOCATION` | The location of the S3 storage service | - | `eu-west-1` |
| `STORAGE_S3_OCR_SCANS_BUCKET` | The name of the S3 bucket for storing OCR scans | - | `pdf-scans` |
| `STORAGE_S3_OCR_SIGNATURE_BUCKET` | The name of the S3 bucket for storing OCR signatures | - | `extracted-signatures` |
| `STORAGE_S3_OCR_SIGNATURE_FILENAME` | The filename pattern for extracted OCR signatures | - | `extracted_signature_{}.png` |
| `STORAGE_S3_ACCESS_KEY` | The access key for connecting to the S3 storage service | - | |
| `STORAGE_S3_SECRET_KEY` | The secret key for connecting to the S3 storage service | - | |
Please note that the default values and examples provided here are for illustrative purposes. Make sure to replace them with the appropriate values based on your S3 storage configuration.
### Performance
| Environment Variable | Definition | Default Value |
| ---------------------------- | ----------------------------------------------------------------------------------------------------------------- | ------------- |
| `ENABLE_PERFORMANCE_PAYLOAD` | When set to true, the response payload will contain performance metrics related to various stages of the process. | `true` |
#### Example
```yaml
"perf": {
"total_time": 998,
"split": {
"get_file": 248,
"extract_images": 172,
"extract_barcodes": 37,
"extract_signatures": 238,
"minio_signature_save": 301
}
}
```
### Certificates
You can override the following environment variables:
| Environment Variable | Definition | Default Value |
| -------------------- | ----------------------------------------------------------------------------------------------------------- | ----------------- |
| `REQUESTS_CA_BUNDLE` | The path to the certificate bundle file used for secure requests | - |
| `CERT_REQUESTS` | The SSL certificate verification mode used for requests | `'CERT_REQUIRED'` |
### Workers Behavior
You can override the following environment variables:
| Environment Variable | Definition | Default Value |
| ------------------------ | ----------------------------------------------------------------------------------------------------------- | ------------- |
| `OCR_WORKER_COUNT` | Number of workers | `5` |
| `OCR_WORK_QUEUE_TIMEOUT` | If no activity has occurred for a certain number of seconds, an attempt will be made to refresh the workers | `10` |
If no worker is released after `OCR_WORK_QUEUE_TIMEOUT` seconds, the application will verify whether any workers have become unresponsive and need to be restarted.
If none of the workers have died, they are likely blocked in some process. In this case, the application terminates all the workers and shuts itself down, relying on the container being restarted.
### Control Aspect Ratio
| Environment Variable | Definition | Default Value |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------- |
| `OCR_SIGNATURE_MAX_RATIO` | Sets the maximum acceptable aspect ratio for a signed scanned document (the OCR plugin will consider a detected signature only if the document aspect ratio is less than or equal to this maximum) | `1.43` |
| `OCR_SIGNATURE_MIN_RATIO` | Sets the minimum acceptable aspect ratio for a signed scanned document (the OCR plugin will recognize a signature only if the document aspect ratio is greater than or equal to this minimum) | `1.39` |
The plugin has been tested with aspect ratio values between 1.38 and 1.43. However, caution is advised when using untested values outside this range, as they may potentially disrupt the functionality. Adjust these parameters at your own risk and consider potential consequences, as untested values might lead to plugin instability or undesired behavior.
# Plugins setup
To set up a plugin in your environment, follow these steps:
* make sure you have all the prerequisites deployed on your environment (for example a [Redis](../../docs/platform-overview/frameworks-and-standards/event-driven-architecture-frameworks/intro-to-redis) cache instance, a DB instance, etc)
* make the necessary configurations for each plugin (DB connection data, related Kafka topic names, etc)
Once you have deployed the necessary plugins in your environment, you can start integrating them in your process definitions.
All of them listen for Kafka events sent by the **FlowX Engine** and perform certain actions depending on the received data. They can also send data back to the Engine.
Some of them require some custom templates to be configured, for these cases, a [WYSIWYG Editor](/4.0/docs/platform-deep-dive/plugins/wysiwyg) is provided.
Let's go into more details on setting up and using each of them:
# Reporting setup
The Reporting setup guide assists in configuring the reporting plugin, relying on specific dependencies and configurations.
## Dependencies
The reporting plugin, available as a Docker image, requires the following dependencies:
* **PostgreSQL**: Dedicated instance for reporting data storage.
* **Reporting-plugin Helm Chart**:
* Utilizes a Spark Application to extract data from the FLOWX.AI Engine database and populate the Reporting plugin database.
* Utilizes Spark Operator (more info [**here**](https://www.kubeflow.org/docs/components/spark-operator)).
* **Superset**:
* Requires a dedicated PostgreSQL database for its operation.
* Utilizes Redis for efficient caching.
* Exposes its user interface via an ingress.
## Reporting plugin helm chart configuration
Configuring the reporting plugin involves several steps:
### Installation of Spark Operator
1. Install the Spark Operator using Helm:
```bash
helm install local-spark-release spark-operator/spark-operator \
--namespace spark-operator --create-namespace \
--set webhook.enable=true \
--set logLevel=6
```
2. Apply RBAC configurations:
```bash
kubectl apply -f spark-rbac.yaml
```
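The contents of `spark-rbac.yaml` are not fixed by the chart; a minimal sketch that grants the `spark` service account (referenced later in the ScheduledSparkApplication) the permissions the operator typically needs could look like this — adjust the namespace, resources, and verbs to your cluster's policies:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: dev                 # hypothetical; use your deployment namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-role
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-role-binding
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: spark
    namespace: dev
roleRef:
  kind: Role
  name: spark-role
  apiGroup: rbac.authorization.k8s.io
```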
3. Build the reporting image:
```bash
docker build ...
```
4. Update the `reporting-image` URL in the `spark-app.yml` file.
5. Configure the correct database ENV variables in the `spark-app.yml` file (see the examples with and without webhook in the sections below).
6. Deploy the application:
```bash
kubectl apply -f operator/spark-app.yaml
```
## Spark Operator deployment options
### Without webhook
For deployments without a webhook, manage secrets and environmental variables for security:
```yaml
sparkApplication: #Defines the Spark application configuration.
enabled: "true" #Indicates that the Spark application is enabled for deployment.
schedule: "@every 5m" #A cronJob that should run at every 5 minutes.
driver: # This section configures the driver component of the Spark application.
envVars: #Environment variables for driver setup.
ENGINE_DATABASE_USER: flowx
ENGINE_DATABASE_URL: postgresql:5432
ENGINE_DATABASE_NAME: process_engine
ENGINE_DATABASE_TYPE: postgres # To set the type of engine database, can be also changed to oracle
REPORTING_DATABASE_USER: flowx
REPORTING_DATABASE_URL: postgresql:5432
REPORTING_DATABASE_NAME: reporting
ENGINE_DATABASE_PASSWORD: "password"
REPORTING_DATABASE_PASSWORD: "password"
executor: #This section configures the executor component of the Spark application.
envVars: #Environment variables for executor setup.
ENGINE_DATABASE_USER: flowx
ENGINE_DATABASE_URL: postgresql:5432
ENGINE_DATABASE_NAME: process_engine
ENGINE_DATABASE_TYPE: postgres # To set the type of engine database, can be also changed to oracle
REPORTING_DATABASE_USER: flowx
REPORTING_DATABASE_URL: postgresql:5432
REPORTING_DATABASE_NAME: reporting
ENGINE_DATABASE_PASSWORD: "password"
REPORTING_DATABASE_PASSWORD: "password"
```
NOTE: Passwords are currently set as plain strings, which is not a secure practice in a production environment.
### With webhook
When using the webhook, employ environmental variables with secrets for a balanced security approach:
```yaml
sparkApplication:
enabled: "true"
schedule: "@every 5m"
driver:
env: #Environment variables for driver setup with secrets.
ENGINE_DATABASE_USER: flowx
ENGINE_DATABASE_URL: postgresql:5432
ENGINE_DATABASE_NAME: process_engine
ENGINE_DATABASE_TYPE: postgres # To set the type of engine database, can be also changed to oracle
REPORTING_DATABASE_USER: flowx
REPORTING_DATABASE_URL: postgresql:5432
REPORTING_DATABASE_NAME: reporting
extraEnvVarsMultipleSecretsCustomKeys:
- name: postgresql-generic
secrets: #Secrets retrieved from a generic source.
ENGINE_DATABASE_PASSWORD: postgresql-password
REPORTING_DATABASE_PASSWORD: postgresql-password
executor:
env: #Environment variables for executor setup with secrets.
ENGINE_DATABASE_USER: flowx
ENGINE_DATABASE_URL: postgresql:5432
ENGINE_DATABASE_NAME: process_engine
ENGINE_DATABASE_TYPE: postgres # To set the type of engine database, can be also changed to oracle
REPORTING_DATABASE_USER: flowx
REPORTING_DATABASE_URL: postgresql:5432
REPORTING_DATABASE_NAME: reporting
extraEnvVarsMultipleSecretsCustomKeys:
- name: postgresql-generic
secrets: #Secrets retrieved from a generic source.
ENGINE_DATABASE_PASSWORD: postgresql-password
REPORTING_DATABASE_PASSWORD: postgresql-password
```
In Kubernetes-based Spark deployments managed by the Spark Operator, you can define the sparkApplication configuration to customize the behavior, resources, and environment for both the driver and executor components of Spark jobs. The driver section allows fine-tuning of parameters specifically pertinent to the driver part of the Spark application.
Below are the configurable values within the chart values.yml file (with webhook):
```yml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: ScheduledSparkApplication
metadata:
name: reporting-plugin-spark-app
namespace: dev
labels:
app.kubernetes.io/component: reporting
app.kubernetes.io/instance: reporting-plugin
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: reporting-plugin
app.kubernetes.io/release: 0.0.1-FLOWXRELEASE
app.kubernetes.io/version: 0.0.1-FLOWXVERSION
helm.sh/chart: reporting-plugin-0.1.1-PR-9-4-20231122153650-e
spec:
schedule: '@every 5m'
concurrencyPolicy: Forbid
template:
type: Python
pythonVersion: "3"
mode: cluster
image: eu.gcr.io/prj-cicd-d-flowxai-jx-6401/reporting-plugin:0.1.1-PR-9-4-20231122153650-eb6c
imagePullPolicy: IfNotPresent
mainApplicationFile: local:///opt/spark/work-dir/main.py
sparkVersion: "3.1.1"
restartPolicy:
type: Never
onFailureRetries: 0
onFailureRetryInterval: 10
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
driver:
cores: 1
coreLimit: 1200m
memory: 512m
labels:
version: 3.1.1
serviceAccount: spark
env:
ENGINE_DATABASE_USER: flowx
ENGINE_DATABASE_URL: postgresql:5432
ENGINE_DATABASE_NAME: process_engine
ENGINE_DATABASE_TYPE: postgres # To set the type of engine database, can be also changed to oracle
REPORTING_DATABASE_USER: flowx
REPORTING_DATABASE_URL: postgresql:5432
REPORTING_DATABASE_NAME: reporting
ENGINE_DATABASE_PASSWORD: "password"
REPORTING_DATABASE_PASSWORD: "password"
extraEnvVarsMultipleSecretsCustomKeys:
- name: postgresql-generic
secrets: #Secrets retrieved from a generic source.
ENGINE_DATABASE_PASSWORD: postgresql-password
REPORTING_DATABASE_PASSWORD: postgresql-password
executor:
cores: 1
instances: 3
memory: 512m
labels:
version: 3.1.1
env: #Environment variables for executor setup with secrets.
ENGINE_DATABASE_USER: flowx
ENGINE_DATABASE_URL: postgresql:5432
ENGINE_DATABASE_NAME: process_engine
ENGINE_DATABASE_TYPE: postgres # To set the type of engine database, can be also changed to oracle
REPORTING_DATABASE_USER: flowx
REPORTING_DATABASE_URL: postgresql:5432
REPORTING_DATABASE_NAME: reporting
extraEnvVarsMultipleSecretsCustomKeys:
- name: postgresql-generic
secrets: #Secrets retrieved from a generic source.
ENGINE_DATABASE_PASSWORD: postgresql-password
REPORTING_DATABASE_PASSWORD: postgresql-password
```
### Superset configuration
For detailed, in-depth configuration information, refer to the official Superset documentation.
## Post-installation steps
After installation, perform the following essential configurations:
### Datasource configuration
For document-related data storage, configure these environment variables:
* `SPRING_DATASOURCE_URL`
* `SPRING_DATASOURCE_USERNAME`
* `SPRING_DATASOURCE_PASSWORD`
Ensure accurate details to prevent startup errors. The Liquibase script manages schema and migrations.
### Redis configuration
The following values should be set with the corresponding Redis-related values:
* `SPRING_REDIS_HOST`
* `SPRING_REDIS_PORT`
## Keycloak configuration
To implement alternative user authentication:
* Override `AUTH_TYPE` in your `superset.yml` configuration file:
* Set `AUTH_TYPE: AUTH_OID`
* Provide the reference to your `openid-connect` realm:
* `OIDC_OPENID_REALM: 'flowx'`
With this configuration, the login page changes to a prompt where the user can select the desired OpenID provider.
### Extend the security manager
First, make sure that Flask stops using `flask-openid` and uses `flask-oidc` instead.
To do so, create your own security manager that configures `flask-oidc` as its authentication provider.
```yml
extraSecrets:
keycloak_security_manager.py: |
from flask_appbuilder.security.manager import AUTH_OID
from superset.security import SupersetSecurityManager
from flask_oidc import OpenIDConnect

class OIDCSecurityManager(SupersetSecurityManager):
    # Swap the default flask-openid provider for flask-oidc
    def __init__(self, appbuilder):
        super(OIDCSecurityManager, self).__init__(appbuilder)
        if self.auth_type == AUTH_OID:
            self.oid = OpenIDConnect(self.appbuilder.get_app)
        self.authoidview = AuthOIDCView
```
To enable OpenID in Superset, you would previously have had to set the authentication type to `AUTH_OID`.
The security manager still executes all the behavior of the super class, but overrides the OID attribute with the `OpenIDConnect` object.
Further, it replaces the default OpenID authentication view with a custom one:
```python
from flask_appbuilder.security.views import AuthOIDView
from flask_login import login_user
from urllib.parse import quote
from flask_appbuilder.views import expose
from flask import request, redirect
class AuthOIDCView(AuthOIDView):
@expose('/login/', methods=['GET', 'POST'])
def login(self, flag=True):
sm = self.appbuilder.sm
oidc = sm.oid
superset_roles = ["Admin", "Alpha", "Gamma", "Public", "granter", "sql_lab"]
default_role = "Admin"
@self.appbuilder.sm.oid.require_login
def handle_login():
user = sm.auth_user_oid(oidc.user_getfield('email'))
if user is None:
info = oidc.user_getinfo(['preferred_username', 'given_name', 'family_name', 'email', 'roles'])
roles = [role for role in superset_roles if role in info.get('roles', [])]
roles += [default_role, ] if not roles else []
user = sm.add_user(info.get('preferred_username'), info.get('given_name', ''), info.get('family_name', ''),
info.get('email'), [sm.find_role(role) for role in roles])
login_user(user, remember=False)
return redirect(self.appbuilder.get_url_for_index)
return handle_login()
@expose('/logout/', methods=['GET', 'POST'])
def logout(self):
oidc = self.appbuilder.sm.oid
oidc.logout()
super(AuthOIDCView, self).logout()
redirect_url = request.url_root.strip('/')
# redirect_url = request.url_root.strip('/') + self.appbuilder.get_url_for_login
return redirect(
oidc.client_secrets.get('issuer') + '/protocol/openid-connect/logout?redirect_uri=' + quote(redirect_url))
```
On authentication, the user is redirected back to Superset.
### Configure Superset authentication
Finally, we need to add some parameters to the superset .yml file:
```yml
'''
---------------------------KEYCLOAK ----------------------------
'''
# imports required by the settings below (the security manager file is mounted on the python path)
from flask_appbuilder.security.manager import AUTH_OID
from keycloak_security_manager import OIDCSecurityManager
import os

curr = os.path.abspath(os.getcwd())
AUTH_TYPE = AUTH_OID
OIDC_CLIENT_SECRETS = curr + '/pythonpath/client_secret.json'
OIDC_ID_TOKEN_COOKIE_SECURE = True
OIDC_REQUIRE_VERIFIED_EMAIL = True
OIDC_OPENID_REALM: 'flowx'
OIDC_INTROSPECTION_AUTH_METHOD: 'client_secret_post'
CUSTOM_SECURITY_MANAGER = OIDCSecurityManager
AUTH_USER_REGISTRATION = False
AUTH_USER_REGISTRATION_ROLE = 'Admin'
OVERWRITE_REDIRECT_URI = 'https://{{ .Values.flowx.ingress.reporting }}/oidc_callback'
'''
--------------------------------------------------------------
'''
```
# Task management setup
The plugin is available as a docker image.
It has the following dependencies:
* a [MongoDB](https://www.mongodb.com/2) database
* needs to be able to connect to the DB used by the engine
* needs to be able to connect to the Kafka instance used by the engine
* a [Redis](https://redis.io/) instance for caching
The plugin comes with most of the needed configuration properties filled in, but there are a few that need to be set up using some custom environment variables.
## Dependencies
### MongoDB database
Basic MongoDB configuration - helm values.yaml
```yaml
task-management-mdb:
  existingSecret: {{secretName}}
  mongodbDatabase: {{TaskManagementDatabaseName}}
  mongodbUsername: {{TaskManagementDatabaseUser}}
  persistence:
    enabled: true
    mountPath: /bitnami/mongodb
    size: 4Gi
  replicaSet:
    enabled: true
    name: rs0
    pdb:
      enabled: true
      minAvailable:
        arbiter: 1
        secondary: 1
    replicas:
      arbiter: 1
      secondary: 1
    useHostnames: true
  serviceAccount:
    create: false
  usePassword: true
```
### Redis server
The plugin can use the [Redis component](../../setup-guides/setup-guides-overview#redis-configuration) already deployed for the **FlowX Engine**.
## Configuration
### Authorization configuration & access roles
The following variables need to be set in order to connect to the identity management platform:
* `SECURITY_OAUTH2_BASE_SERVER_URL`
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID`
* `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET`
* `SECURITY_OAUTH2_REALM`
A specific service account should be configured in the OpenID provider to allow the Task management microservice to access realm-specific data. It can be configured using the following environment variables:
### OpenID connect settings
* `SECURITY_TYPE`: Indicates that OAuth 2.0 is the chosen security type, default value: `oauth2`.
* `SECURITY_PATHAUTHORIZATIONS_0_PATH`: Defines a security path or endpoint pattern. In this case, it specifies that the security settings apply to all paths under the "/api/" path. The `**` is a wildcard that means it includes all subpaths under "/api/\*\*".
* `SECURITY_PATHAUTHORIZATIONS_0_ROLESALLOWED`: Specifies the roles allowed for accessing the specified path. In this case, the roles allowed are empty (""). This might imply that access to the "/api/\*\*" paths is open to all users or that no specific roles are required for authorization.
* `SECURITY_OAUTH2_BASE_SERVER_URL`: This setting specifies the base URL of the OpenID server, which is used for authentication and authorization.
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID`: The task management service account is utilized to facilitate process initiation, enable the use of the task management plugin (requiring the `FLOWX_ROLE` and role mapper), and access data from Keycloak.
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET`: Along with the client ID, you must also specify the client secret associated with the service account for proper authentication.
More details about the necessary service account can be found here: [Configuring an IAM solution](./access-management/configuring-an-iam-solution).
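A minimal sketch of the corresponding configuration, assuming Keycloak as the OpenID provider; the client ID and secret below are placeholders, not actual defaults:
```yaml
security:
  oauth2:
    base-server-url: http://localhost:8080/auth
    realm: flowx
    service-account:
      admin:
        client-id: flowx-task-management-sa  # placeholder service account client
        client-secret: <client-secret>       # placeholder secret
```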
### FlowX Engine datasource configuration
The service needs to retrieve process instance data from the engine database, so it must be configured with the correct connection details for that database.
The following configuration details need to be added in configuration files or overwritten using environment variables:
* `SPRING_DATASOURCE_URL`
* `SPRING_DATASOURCE_USERNAME`
* `SPRING_DATASOURCE_PASSWORD`
### MongoDB configuration
The only thing that needs to be configured is the DB access info; the rest is handled by the plugin.
* `SPRING_DATA_MONGODB_URI` - the URI for the MongoDB database
### Redis configuration
The following values should be set with the corresponding Redis-related values.
* `SPRING_REDIS_HOST`
* `SPRING_REDIS_PASSWORD`
* `REDIS_TTL`
## Kafka configuration
The following Kafka related configurations can be set by using environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS` - address of the Kafka server
* `SPRING_KAFKA_CONSUMER_GROUP_ID` - group of consumers
* `KAFKA_CONSUMER_THREADS` - the number of Kafka consumer threads
* `KAFKA_CONSUMER_EXCLUDE_USERS_THREADS` - the number of Kafka consumer threads used for processing messages about users to be excluded from automatic assignment
* `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` - the interval between retries after `AuthorizationException` is thrown by `KafkaConsumer`
* `KAFKA_MESSAGE_MAX_BYTES` - this is the largest size of the message that can be received by the broker from a producer.
### Kafka topics
Each action available in the service corresponds to a Kafka event. A separate Kafka topic must be configured for each use-case:
* `KAFKA_TOPIC_PROCESS_START_OUT` - used for running hooks; the engine receives a start process request for a hook on this topic, and it needs to be matched with the corresponding `...start_in` topic on the engine side
* `KAFKA_TOPIC_PROCESS_OPERATIONS_OUT` - used to update the engine on task manager operations such as assignment, unassignment, hold, unhold, and terminate; it is matched with the `...operations_in` topic on the engine side
* `KAFKA_TOPIC_PROCESS_SCHEDULE_IN` - used to receive a message from the task manager when it's time to run a hook (for hooks configured with SLA; for more details on how to configure a hook with SLA, click [here](/4.0/docs/platform-deep-dive/plugins/custom-plugins/task-management/using-hooks))
* `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_SET` - sends a message to the scheduler to set hooks or to exclude users from automatic assignment when they are assigned to the [out of office feature](/4.0/docs/platform-deep-dive/plugins/custom-plugins/task-management/using-out-of-office-records); it needs to be matched with the configuration in the scheduler
* `KAFKA_TOPIC_PROCESS_SCHEDULE_OUT_STOP` - sends a message to the scheduler to stop the schedule for the above actions; it needs to be matched with the configuration in the scheduler
* `KAFKA_TOPIC_EXCLUDE_USERS_SCHEDULE_IN`- is used to receive a message from the scheduler when users need to be excluded
* `KAFKA_TOPIC_TASK_IN`- used to receive a message from the engine to start a new task. It needs to be matched with the corresponding task\_out topic on the engine side.
* `KAFKA_TOPIC_EVENTS_GATEWAY_OUT_MESSAGE` - outgoing messages for Events Gateway
The Engine listens for messages on topics with names that follow a certain pattern, so make sure to use the correct outgoing topic names when configuring the task management plugin.
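As a sketch, the wiring might look like the snippet below; the topic names are placeholders for illustration only and must mirror the topics configured on the engine side:
```yaml
env:
  KAFKA_TOPIC_PROCESS_START_OUT: ai.flowx.dev.core.trigger.start.process    # placeholder; must match the engine's ...start_in topic
  KAFKA_TOPIC_PROCESS_OPERATIONS_OUT: ai.flowx.dev.core.trigger.operations  # placeholder; must match the engine's ...operations_in topic
  KAFKA_TOPIC_TASK_IN: ai.flowx.dev.plugin.tasks.in                         # placeholder; must match the engine's task_out topic
```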
### Logging
The following environment variables could be set in order to control log levels:
* `LOGGING_LEVEL_ROOT` - root spring boot microservice logs
* `LOGGING_LEVEL_APP` - app level logs
* `LOGGING_LEVEL_MONGO_DRIVER` - MongoDB driver logs
### Filtering
* `FLOWX_ALLOW_USERNAME_SEARCH_PARTIAL` - filter possible assignees by partial names (default: true)
# Runtime manager setup
This guide provides a step-by-step process for setting up and configuring the Runtime Manager module, including database, Kafka, and OAuth2 authentication settings to manage runtime and build configurations.
## Infrastructure prerequisites
The Runtime Manager service requires the following components to be set up before it can be started:
* **PostgreSQL** - version 13 or higher for managing application data
* **MongoDB** - version 4.4 or higher for managing runtime data
* **Redis** - version 6.0 or higher (if required)
* **Kafka** - version 2.8 or higher for event-driven communication between services
* **OAuth2 Authentication** - Ensure a compatible OAuth2 authorization server is configured.
Note: The same container image and Helm chart are used for both [**Application Manager**](./runtime-manager) and Runtime Manager. Be sure to review the [**deployment guidelines**](../../release-notes/v4.5.1-november-2024/deployment-guidelines-v4.5.1#component-versions) in the release notes to verify compatibility and check the correct version.
## Dependencies
* [**Database configuration**](#database-configuration)
* [**Kafka configuration**](#configuring-kafka)
* [**Authentication & access roles**](#configuring-authentication-and-access-roles)
* [**Logging**](./setup-guides-overview#logging)
## Configuration
### General environment variables
The following environment variables provide essential configurations:
* `LOGGING_CONFIG_FILE` - Path to the logging configuration file for customized logging levels.
* `SPRING_APPLICATION_NAME` - Sets the application name.
* **Default value:** `application-manager`; this must be changed to `runtime-manager` for this service.
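For example, in a Helm-style environment block (a sketch; the logging file path is a placeholder):
```yaml
env:
  LOGGING_CONFIG_FILE: /config/logback.xml  # placeholder path to a custom logging configuration
  SPRING_APPLICATION_NAME: runtime-manager  # overrides the application-manager default
```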
### Database configuration
The Runtime Manager uses the same PostgreSQL (to store application data) and MongoDB (to manage runtime data) as [**application-manager**](application-manager). Configure these database connections with the following environment variables:
#### PostgreSQL (Application data)
* `SPRING_DATASOURCE_URL` - Database URL for the PostgreSQL data source (same as the one configured in `application-manager` setup)
#### MongoDB (Runtime data)
* `SPRING_DATA_MONGODB_URI` - URI for connecting to MongoDB for runtime data (same as the one configured in `application-manager` setup)
* Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@,,:/?retryWrites=false`
* `DB_USERNAME`: `app-runtime`
### Configuring Kafka
Kafka is used for event-driven operations within the Runtime Manager. Set up the Kafka configuration using the following environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS` - Address of the Kafka server in the format `host:port`
* `KAFKA_TOPIC_NAMING_ENVIRONMENT` - Environment-specific suffix for Kafka topics
#### Kafka OAuth Authentication
To securely integrate with Kafka, configure the following OAuth credentials:
* `KAFKA_OAUTH_CLIENT_ID` - OAuth Client ID for Kafka
* `KAFKA_OAUTH_CLIENT_SECRET` - OAuth Client Secret for Kafka
* `KAFKA_OAUTH_TOKEN_ENDPOINT_URI` - OAuth Token Endpoint URI for obtaining Kafka tokens
* **Format:** `https:///auth/realms//protocol/openid-connect/token`
### Configuring authentication and access roles
The Runtime Manager uses OAuth2 for secure access control. Set up the OAuth2 configurations with the following environment variables:
* `SECURITY_OAUTH2_BASE_SERVER_URL` - URL for the OAuth2 authorization server
* `SECURITY_OAUTH2_REALM` - Realm for OAuth2 authentication
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID` - Client ID for the Runtime Manager's OAuth2 client
### Redis configuration (optional)
If Redis is required for caching, set the following environment variable:
* `SPRING_REDIS_HOST` - Hostname or IP address of the Redis server
### Configuring file storage
For file storage needs, configure the S3-compatible storage with the following environment variable:
* `APPLICATION_FILE_STORAGE_S3_SERVER_URL` - URL of the S3-compatible storage server for storing application files.
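A sketch with a placeholder endpoint; any S3-compatible server (for example, a MinIO deployment) would follow the same shape:
```yaml
env:
  APPLICATION_FILE_STORAGE_S3_SERVER_URL: http://minio:9000  # placeholder S3-compatible endpoint
```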
### Ingress configuration
For exposing the Runtime manager service, configure public, admin and adminInstances ingress settings:
```yaml
ingress:
  enabled: true
  public:
    enabled: true
    hostname: "{{ .Values.flowx.ingress.public }}"
    path: /rtm/api/runtime(/|$)(.*)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /api/runtime/$2
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,platform,Flowx-Platform
  admin:
    enabled: true
    hostname: "{{ .Values.flowx.ingress.admin }}"
    path: /rtm/api/build-mgmt(/|$)(.*)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /api/build-mgmt/$2
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,platform,Flowx-Platform
  adminInstances:
    enabled: true
    hostname: "{{ .Values.flowx.ingress.admin }}"
    path: /rtm/api/runtime(/|$)(.*)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /api/runtime/$2
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,platform,Flowx-Platform
```
> **Note:** Replace placeholders in environment variables with the appropriate values for your environment before starting the service.
# Scheduler setup
This guide will walk you through the process of setting up the Scheduler service.
## Infrastructure prerequisites
* **MongoDB**: Version 4.4 or higher for storing scheduled messages and timer events.
* **Kafka**: Version 2.8 or higher.
* **OpenID Connect Settings**: Default settings are for Keycloak.
## Dependencies
* [MongoDB](https://www.mongodb.com/2) database
* Ability to connect to a Kafka instance used by the FlowX Engine
* Scheduler service account - required for using Start Timer event node - see [**here**](./access-management/configuring-an-iam-solution#scheduler-service-account)
The service comes with most of the required configuration properties pre-filled. However, certain custom environment variables need to be set up.
### MongoDB helm example
Basic MongoDB configuration - helm values.yaml
```yaml
scheduler-mdb:
  existingSecret: {{secretName}}
  mongodbDatabase: {{SchedulerDatabaseName}}
  mongodbUsername: {{SchedulerDatabaseUser}}
  persistence:
    enabled: true
    mountPath: /bitnami/mongodb
    size: 4Gi
  replicaSet:
    enabled: true
    name: rs0
    pdb:
      enabled: true
      minAvailable:
        arbiter: 1
        secondary: 1
    replicas:
      arbiter: 1
      secondary: 1
    useHostnames: true
  serviceAccount:
    create: false
  usePassword: true
```
To work correctly, this service must connect to a MongoDB database that is configured with replicas.
## Scheduler configuration
### Scheduler
```yaml
scheduler:
  thread-count: 30 # number of threads to be used for sending expired messages
  callbacks-thread-count: 60 # number of threads for handling Kafka responses, whether the message was successfully sent or not
  cronExpression: "*/10 * * * * *" # every 10 seconds
  retry: # retry mechanism
    max-attempts: 3
    seconds: 1
    thread-count: 3
    cronExpression: "*/10 * * * * *" # every 10 seconds
  cleanup:
    cronExpression: "*/25 * * * * *" # every 25 seconds
```
* `SCHEDULER_THREAD_COUNT`: Used to configure the number of threads to be used for sending expired messages.
* `SCHEDULER_CALLBACKS_THREAD_COUNT`: Used to configure the number of threads for handling Kafka responses, whether the message was successfully sent or not.
The "scheduler.cleanup.cronExpression" is valid for both scheduler and timer event scheduler.
#### Retry mechanism
* `SCHEDULER_RETRY_THREAD_COUNT`: Specify the number of threads to use for resending messages that need to be retried.
* `SCHEDULER_RETRY_MAX_ATTEMPTS`: This configuration parameter sets the number of retry attempts. For instance, if it's set to 3, it means that the system will make a maximum of three retry attempts for message resending.
* `SCHEDULER_RETRY_SECONDS`: This configuration parameter defines the time interval, in seconds, for retry attempts. For example, when set to 1, it indicates that the system will retry the operation after a one-second delay.
#### Cleanup
* `SCHEDULER_CLEANUP_CRONEXPRESSION`: It specifies how often, in seconds, events that have already been processed should be cleaned up from the database.
#### Recovery mechanism
```yaml
flowx:
timer-calculator:
delay-max-repetitions: 1000000
```
You have a "next execution" set for 10:25, and the cycle step is 10 minutes. If the instance goes down for 2 hours, the next execution time should be 12:25, not 10:35. To calculate this, you add 10 minutes repeatedly to 10:25 until you reach the current time. So, it would be 10:25 + 10 min + 10 min + 10 min, until you reach the current time of 12:25. This ensures that the next execution time is adjusted correctly after the downtime.
* `FLOWX_TIMER_CALCULATOR_DELAY_MAX_REPETITIONS`: This means that, for example, if our cycle step is set to one second and the system experiences a downtime of two weeks, which is equivalent to 1,209,600 seconds, and we have the "max repetitions" set to 1,000,000, it will attempt to calculate the next schedule. However, when it reaches the maximum repetitions, an exception is thrown, making it impossible to calculate the next schedule. As a result, the entry remains locked and needs to be rescheduled. This scenario represents a critical case where the system experiences extended downtime, and the cycle step is very short (e.g., 1 second), leading to the inability to determine the next scheduled event.
### Timer event scheduler
Configuration for Timer Event scheduler designed to manage timer events. Similar configuration to scheduler.
```yaml
timer-event-scheduler:
  thread-count: 30
  callbacks-thread-count: 60
  cronExpression: "*/1 * * * * *" # every 1 second
  retry:
    max-attempts: 3
    seconds: 1
    thread-count: 3
    cronExpression: "*/5 * * * * *" # every 5 seconds
```
## OpenID connect settings
Default settings are for Keycloak.
* `SECURITY_TYPE`: Indicates that OAuth 2.0 is the chosen security type, default value: `oauth2`.
* `SECURITY_PATHAUTHORIZATIONS_0_PATH`: Defines a security path or endpoint pattern. It specifies that the security settings apply to all paths under the "/api/" path. The `**` is a wildcard that means it includes all subpaths under "/api/\*\*".
* `SECURITY_PATHAUTHORIZATIONS_0_ROLESALLOWED`: Specifies the roles allowed for accessing the specified path. In this case, the roles allowed are empty (""). This might imply that access to the "/api/\*\*" paths is open to all users or that no specific roles are required for authorization.
```yaml example
security:
  type: oauth2
  pathAuthorizations:
    - path: "/api/**"
      rolesAllowed: "ANY_AUTHENTICATED_USER"
```
* `SECURITY_OAUTH2_BASE_SERVER_URL`: This setting specifies the base URL of the OpenID server, which is used for authentication and authorization.
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID`: This setting specifies the service account that is essential for enabling the [**Start Timer**](../docs/building-blocks/node/timer-events/timer-start-event) event node. Ensure that you provide the correct client ID for this service account.
* `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET`: Along with the client ID, you must also specify the client secret associated with the service account for proper authentication.
More details about the necessary service account can be found here:
[Scheduler service account](./access-management/configuring-an-iam-solution#scheduler-service-account)
```yaml example
oauth2:
  base-server-url: http://localhost:8080/auth
  realm: flowx
  client:
    access-token-uri: ${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/token
    client-id: flowx-platform-authorize
    client-secret: wrongsecret
  resource:
    user-info-uri: ${security.oauth2.base-server-url}/realms/${security.oauth2.realm}/protocol/openid-connect/userinfo
  service-account:
    admin:
      client-id: flowx-scheduler-core-sa
      client-secret: wrongsecret
```
## Configuring datasource (MongoDB)
The MongoDB database is used to persist scheduled messages until they are sent back. The following configurations need to be set using environment variables:
* `SPRING_DATA_MONGODB_URI`: The URI for the MongoDB database.
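A sketch of the URI, assuming the replica set deployed with the Helm chart above; host names and credentials are placeholders:
```yaml
env:
  SPRING_DATA_MONGODB_URI: mongodb://{{SchedulerDatabaseUser}}:<password>@scheduler-mdb-0:27017,scheduler-mdb-1:27017/{{SchedulerDatabaseName}}?replicaSet=rs0
```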
## Configuring Kafka
The following Kafka related configurations can be set by using environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS`: Address of the Kafka server
* `SPRING_KAFKA_CONSUMER_GROUP_ID`: Group of consumers
* `KAFKA_CONSUMER_THREADS` (default: 1): The number of Kafka consumer threads
* `KAFKA_CONSUMER_SCHEDULED_TIMER_EVENTS_THREADS` (default: 1): The number of Kafka consumer threads related to starting Timer Events
* `KAFKA_CONSUMER_SCHEDULED_TIMER_EVENTS_GROUP_ID`: Group of consumers related to starting timer events
* `KAFKA_CONSUMER_STOP_SCHEDULED_TIMER_EVENTS_THREADS` (default: 1): The number of Kafka consumer threads related to stopping Timer Events
* `KAFKA_CONSUMER_STOP_SCHEDULED_TIMER_EVENTS_GROUP_ID`: Group of consumers related to stopping timer events
* `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL`: The interval between retries after `AuthorizationException` is thrown by `KafkaConsumer`
* `KAFKA_TOPIC_SCHEDULE_IN_SET`: Receives scheduled message setting requests from the Admin and Process engine microservices
* `KAFKA_TOPIC_SCHEDULER_IN_STOP`: Handles requests from the Admin and Process engine microservices to terminate scheduled messages.
* `KAFKA_TOPIC_SCHEDULED_TIMER_EVENTS_IN_SET`: Needed to use Timer Events nodes
* `KAFKA_TOPIC_SCHEDULED_TIMER_EVENTS_IN_STOP`: Needed to use Timer Events nodes
Each action available in the service corresponds to a Kafka event. A separate Kafka topic must be configured for each use-case.
Make sure the topics configured for this service don't follow the engine pattern.
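For illustration, a set/stop topic pair might be wired as below; the topic names are placeholders and must match the values configured in the Admin and Process engine microservices:
```yaml
env:
  KAFKA_TOPIC_SCHEDULE_IN_SET: ai.flowx.dev.core.trigger.set.schedule     # placeholder
  KAFKA_TOPIC_SCHEDULER_IN_STOP: ai.flowx.dev.core.trigger.stop.schedule  # placeholder
```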
## Configuring logging
The following environment variables could be set in order to control log levels:
* `LOGGING_LEVEL_ROOT: INFO` (Default): Root spring boot microservice logs
* `LOGGING_LEVEL_APP`: App level logs
# FlowX Data Search setup
This guide will walk you through the process of setting up the Data Search service using a Docker image.
## Infrastructure prerequisites
Before proceeding, ensure the following components are set up:
* **Redis** - version 6.0 or higher
* **Kafka** - version 2.8 or higher
* **Elasticsearch** - version 7.11.0 or higher
## Dependencies
* **Kafka**: Used for communication with the engine
* **Elasticsearch**: Used for indexing and searching data
* **Redis**: Used for caching
## Configuration
### Kafka Configuration
Set the following Kafka-related configurations using environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS`: Address of the Kafka server
* `KAFKA_TOPIC_DATA_SEARCH_IN`: The Kafka topic for the search service requests to the engine
* `KAFKA_TOPIC_DATA_SEARCH_OUT`: The topic on which the engine awaits the response
* `KAFKA_CONSUMER_THREADS`: Number of Kafka consumer threads
### Elasticsearch configuration
Set the following Elasticsearch-related configurations using environment variables:
* `SPRING_ELASTICSEARCH_REST_URIS`
* `SPRING_ELASTICSEARCH_REST_PROTOCOL`
* `SPRING_ELASTICSEARCH_REST_DISABLESSL`
* `SPRING_ELASTICSEARCH_REST_USERNAME`
* `SPRING_ELASTICSEARCH_REST_PASSWORD`
* `SPRING_ELASTICSEARCH_INDEX_SETTINGS_NAME` (default: `process_instance`)
```yaml
spring:
  elasticsearch:
    rest:
      protocol: https
      uris: localhost:9200
      disableSsl: false
      username: ""
      password: ""
    index-settings:
      name: process_instance
```
### Authorization & Access Roles Configuration
Set the following environment variables to connect to the identity management platform:
* `SECURITY_OAUTH2_BASE_SERVER_URL`
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID`
* `SECURITY_OAUTH2_REALM`
### Logging Configuration
Control log levels using these environment variables:
* `LOGGING_LEVEL_ROOT`: For root spring boot microservice logs
* `LOGGING_LEVEL_APP`: For app-level logs
### Elasticsearch
Data search in Elasticsearch operates against an index pattern representing multiple indices. The index pattern is derived from the configuration property `spring.elasticsearch.index-settings.name`.
Here's an example filter for use in Kibana (generated by data search):
```json
{
"query": {
"bool": {
"adjust_pure_negative": true,
"boost": 1,
"must": [
{
"nested": {
"boost": 1,
"ignore_unmapped": false,
"path": "keyIdentifiers",
"query": {
"bool": {
"adjust_pure_negative": true,
"boost": 1,
"must": [
{
"match": {
"keyIdentifiers.key.keyword": {
"auto_generate_synonyms_phrase_query": true,
"boost": 1,
"fuzzy_transpositions": true,
"lenient": false,
"max_expansions": 50,
"operator": "OR",
"prefix_length": 0,
"query": "astonishingAttribute",
"zero_terms_query": "NONE"
}
}
},
{
"match": {
"keyIdentifiers.originalValue.keyword": {
"auto_generate_synonyms_phrase_query": true,
"boost": 1,
"fuzzy_transpositions": true,
"lenient": false,
"max_expansions": 50,
"operator": "OR",
"prefix_length": 0,
"query": "OriginalGangsta",
"zero_terms_query": "NONE"
}
}
}
]
}
},
"score_mode": "none"
}
},
{
"terms": {
"boost": 1,
"processDefinitionName.keyword": [
"TEST_PORCESS_NAME_0",
"TEST_PORCESS_NAME_1"
]
}
}
]
}
}
}
```
Kibana is an open-source data visualization and exploration tool designed primarily for Elasticsearch. It serves as the visualization layer for the Elastic Stack, allowing users to interact with their data stored in Elasticsearch to perform various activities such as querying, analyzing, and visualizing data.
For more information about Kibana and its capabilities, visit the [**Kibana official documentation**](https://www.elastic.co/guide/en/kibana/current/index.html). This resource provides in-depth guidance, tutorials, and documentation on how to use Kibana effectively for data visualization, analysis, and dashboard creation.
# Setup guides
Deploying microservices involves breaking down an application into smaller, modular components that can be independently deployed. Each microservice should include all necessary dependencies and configurations to ensure smooth and reliable operation.
Microservices can be deployed using container management systems like Docker or Kubernetes, which allow for the deployment of multiple microservices within a unified environment.
To achieve a smooth and successful deployment of your microservices, it's crucial to follow the recommended installation order. Proper sequencing avoids dependency issues and ensures that each service operates correctly within the overall system architecture.
**Recommended Installation Order for Microservices**:
Deploy the following backend services simultaneously, as they are foundational and interdependent:
* admin
* audit-core
* task-management
* scheduler-core
* data-search
* license-core
* events-gateway
* others (any additional microservices)
Finally, deploy the Frontend service. This should be done after the backend is fully operational, although it can be deployed alongside the backend if it won't be accessed from the browser immediately (note that the platform status page might show errors or appear incomplete while the backend is still starting up).
Following this sequence ensures that the core components are established before dependent services start. This approach minimizes the risk of initialization errors and incomplete system states. Prioritizing backend services is essential, as they provide the foundation for other components.
## Environment variables
Environment variables are variables that are set in the system environment and can be used by applications and services to store and access configuration information. Environment variables typically include settings such as paths to directories, file locations, settings for the operating system and applications, and more.
Environment variables are used to store and access configuration information in a secure and efficient manner. Below you will find some examples of common/shared environment variables that need to be set for different services and components.
## Authorization & access roles
An identity management platform is a software system that helps you manage authorization & access roles, including user accounts, passwords, access control, and authentication. Identity management platforms typically offer features such as user provisioning, identity federation, and single sign-on.
The following variables need to be set in order to connect to the identity management platform:
* `SECURITY_OAUTH2_BASE_SERVER_URL` - the base URL for the OAuth 2.0 Authorization Server, which is responsible for authentication and authorization for clients and users, it is used to authorize clients, as well as to issue and validate access tokens
* `SECURITY_OAUTH2_CLIENT_CLIENT_ID` - a unique identifier for a client application that is registered with the OAuth 2.0 Authorization Server, this is used to authenticate the client application when it attempts to access resources on behalf of a user
* `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` - secret key that is used to authenticate requests made by an authorization client
* `SECURITY_OAUTH2_REALM` - security configuration env var in the Spring Security OAuth2 framework, it is used to specify the realm name used when authenticating with OAuth2 providers
## Datasource configuration
Datasource configuration is the process of configuring a data source, such as a database, file, or web service, so that an application can connect to it and use the data. This typically involves setting up the connection parameters, such as the host, port, username, and password.
In some cases, additional configuration settings may be required, such as specifying the type of data source (e.g. Oracle, MySQL, etc.) or setting up access control for data access.
Environment variables are more secure than hard-coding credentials in the application code and make it easier to update data source parameters without having to modify the application code.
Some microservices connect to the same Postgres / Oracle database as the [**FlowX Engine**](./flowx-engine-setup-guide/engine-setup) (the [**Admin**](./admin-setup-guide) microservice, for example).
Depending on the data source type, various parameters may need to be configured. For example, if connecting to an Oracle database, the driver class name, and the database schema must be provided. For MongoDB, the URI is needed.
The following variables need to be set in order to set the datasource:
* `SPRING_DATASOURCE_URL` - environment variable used to configure a data source URL for a Spring application, it typically contains the JDBC driver name, the server name, port number, and database name
* `SPRING_DATASOURCE_USERNAME` - environment variable used to set the username for the database connection, this can be used to connect to a database instance
* `SPRING_DATASOURCE_PASSWORD` - environment variable used to store the password for the database connection, this can be used to secure access to the database and ensure that only authorized users have access to the data
* `SPRING_DATASOURCE_DRIVERCLASSNAME` (❗️only for Oracle DBs) - environment variable used to set the class name of the JDBC driver that the Spring DataSource will use to connect to the database
* `SPRING_JPA_PROPERTIES_HIBERNATE_DEFAULTSCHEMA` (❗️only for Oracle DBs) - environment variable used to overwrite the name of the database schema
* `SPRING_DATA_MONGODB_URI` (❗️only for MongoDB) - environment variable used to provide the connection string for a MongoDB database; this connection string provides the host, port, database name, user credentials, and other configuration details for the MongoDB server
Make sure the user, password, connection URL, and database name are configured correctly; otherwise, you will receive errors at startup.
The datasource is configured automatically via a Liquibase script inside the engine. All updates will include migration scripts.
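For instance, an Oracle connection might be configured as follows (a sketch; host, schema, and credentials are placeholders):
```yaml
env:
  SPRING_DATASOURCE_URL: jdbc:oracle:thin:@oracle-host:1521/FLOWX  # placeholder Oracle JDBC URL
  SPRING_DATASOURCE_USERNAME: flowx                                # placeholder user
  SPRING_DATASOURCE_PASSWORD: <db-password>                        # placeholder secret
  SPRING_DATASOURCE_DRIVERCLASSNAME: oracle.jdbc.OracleDriver
  SPRING_JPA_PROPERTIES_HIBERNATE_DEFAULTSCHEMA: FLOWX             # placeholder schema name
```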
## Kafka
The following Kafka-related configurations can be set by using environment variables:
* `SPRING_KAFKA_BOOTSTRAP_SERVERS` - environment variable used to configure the list of brokers to which the kafka client will connect, this is a comma-separated list of host and port pairs that are the addresses of the Apache Kafka brokers in a Kafka cluster
* `SPRING_KAFKA_CONSUMER_GROUP_ID` - environment variable used to set the consumer group ID for the Kafka consumer; it identifies which consumer group the consumer belongs to and allows the Kafka broker to manage which messages are consumed by each consumer in the group. Note that the group ID (and the number of consumer threads) might differ for services that use separate group IDs per topic.
* `KAFKA_CONSUMER_THREADS` - environment variable used to control the number of threads that a Kafka consumer instance can use to consume messages from a cluster, it defines the number of threads that the consumer instance should use to poll for messages from the Kafka cluster
* `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` - environment variable used to set the interval at which Kafka clients should retry authentication exceptions (the interval between retries after AuthorizationException is thrown by KafkaConsumer)
* `KAFKA_MESSAGE_MAX_BYTES` - this is the largest size of the message that can be received by the broker from a producer.
Each action available in the service corresponds to a Kafka event. A separate Kafka topic must be configured for each use case.
FlowX Engine is listening for messages on topics with names of a certain pattern, make sure to use correct outgoing topic names when configuring the services.
## Redis configuration
Redis configuration involves setting up the connection parameters, such as the host, port, username, and password. In some cases, additional configuration settings may be required, such as specifying the type of data store or setting up access control for data access.
* `SPRING_REDIS_HOST` - environment variable used to configure the hostname or IP address of a Redis server when using Spring Data Redis
* `SPRING_REDIS_PASSWORD` - environment variable used to store the password for authenticating with a Redis server; it secures access to the Redis server and should be kept confidential
* `REDIS_TTL` - environment variable used to specify the maximum time-to-live (TTL) for a key in Redis; it sets a limit on how long a key can exist before it is automatically expired (Redis will delete the key after the specified TTL has passed)
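A sketch of these values; the host is a placeholder, and the TTL unit is assumed to be milliseconds:
```yaml
env:
  SPRING_REDIS_HOST: redis-master          # placeholder host
  SPRING_REDIS_PASSWORD: <redis-password>  # placeholder secret
  REDIS_TTL: 5000000                       # assumed milliseconds
```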
## Debugging
Advanced debugging features can be enabled. When this happens, snapshots of the process status will be taken after each action and can later be used for debugging purposes. This feature comes with an exponential increase in database usage, so we suggest setting the flag to true in debugging environments and to false in production ones.
## Logging
The following environment variables could be set in order to control log levels:
* `LOGGING_LEVEL_ROOT` - root spring boot microservice logs
* `LOGGING_LEVEL_APP` - controls the verbosity of the application's logs and how much information is recorded (app level logs)
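For example (standard Spring Boot log levels; the values shown are illustrative):
```yaml
env:
  LOGGING_LEVEL_ROOT: INFO  # root spring boot microservice logs
  LOGGING_LEVEL_APP: DEBUG  # app-level logs
```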
## Tracing via Jaeger
Jaeger tracing has been removed starting with the FlowX.AI v4.1 release.
Tracing via Jaeger involves collecting timing data from the components in a distributed application. This allows you to better identify bottlenecks and latency issues.
The following FlowX.AI services used Jaeger tracing (prior to v4.1):
* scheduler-core
* customer-management-plugin
* document-plugin
* notification-plugin
* process-engine
## License model
A license model is a set of rules and regulations governing how software can be used, distributed, and modified. It also outlines the rights and responsibilities of the software user and the software developer. Common license models include open source, freeware, shareware, and commercial software.
Most of the third-party components used by FlowX are licensed under the [**Apache License 2.0**](https://www.apache.org/licenses/LICENSE-2.0).
## Third-party components
Third-party components are software components or libraries that are not part of FLOWX.AI but are created by another company or individual and used in a development project.
These components can range from databases and operating systems to user interface components, frameworks, APIs, and libraries that provide support for a specific feature or task.
# FlowX CMS access rights
Granular access rights can be configured for restricting access to the CMS component.
Four different access authorizations are provided, each with specified access scopes:
1. **Manage-contents** - for configuring access for manipulating CMS contents
Available scopes:
* import - users are able to import enumeration/substitution tags
* read - users are able to show enumeration/substitution tags, export enumeration/substitution tags
* edit - users are able to create/edit enumeration/substitution tags
* admin - users are able to delete enumeration/substitution tags
2. **Manage-taxonomies** - for configuring access for manipulating taxonomies
Available scopes:
* read - users are able to show languages/source systems
* edit - users are able to edit languages/source systems
* admin - users are able to delete languages/source systems
3. **Manage-media-library** - for configuring access rights to use Media Library
Available scopes:
* import - users are able to import assets
* read - users are able to view assets
* edit - users are able to edit assets
* admin - users are able to delete assets
4. **Manage-themes** - for configuring access rights to use themes, fonts and designer assets
Available scopes:
* import - users are able to import fonts
* read - users are able to view fonts
* edit - users are able to edit fonts
* admin - users are able to delete fonts
The CMS service is preconfigured with the following default user roles for each of the access scopes mentioned above:
* **manage-contents**
* import:
* ROLE\_CMS\_CONTENT\_IMPORT
* ROLE\_CMS\_CONTENT\_EDIT
* ROLE\_CMS\_CONTENT\_ADMIN
* read:
* ROLE\_CMS\_CONTENT\_EDIT
* ROLE\_CMS\_CONTENT\_ADMIN
* ROLE\_CMS\_CONTENT\_READ
* ROLE\_CMS\_CONTENT\_IMPORT
* edit:
* ROLE\_CMS\_CONTENT\_EDIT
* ROLE\_CMS\_CONTENT\_ADMIN
* admin:
* ROLE\_CMS\_CONTENT\_ADMIN
* **manage-taxonomies**
* import:
* ROLE\_CMS\_TAXONOMIES\_IMPORT
* ROLE\_CMS\_TAXONOMIES\_EDIT
* ROLE\_CMS\_TAXONOMIES\_ADMIN
* read:
* ROLE\_CMS\_TAXONOMIES\_READ
* ROLE\_CMS\_TAXONOMIES\_IMPORT
* ROLE\_CMS\_TAXONOMIES\_EDIT
* ROLE\_CMS\_TAXONOMIES\_ADMIN
* edit:
* ROLE\_CMS\_TAXONOMIES\_EDIT
* ROLE\_CMS\_TAXONOMIES\_ADMIN
* admin:
* ROLE\_CMS\_TAXONOMIES\_ADMIN
* **manage-media-library**
* import:
* ROLE\_MEDIA\_LIBRARY\_IMPORT
* ROLE\_MEDIA\_LIBRARY\_EDIT
* ROLE\_MEDIA\_LIBRARY\_ADMIN
* read:
* ROLE\_MEDIA\_LIBRARY\_READ
* ROLE\_MEDIA\_LIBRARY\_EDIT
* ROLE\_MEDIA\_LIBRARY\_ADMIN
* ROLE\_MEDIA\_LIBRARY\_IMPORT
* edit:
* ROLE\_MEDIA\_LIBRARY\_EDIT
* ROLE\_MEDIA\_LIBRARY\_ADMIN
* admin:
* ROLE\_MEDIA\_LIBRARY\_ADMIN
* **manage-themes**
* import:
* ROLE\_THEMES\_IMPORT
* ROLE\_THEMES\_EDIT
* ROLE\_THEMES\_ADMIN
* read:
* ROLE\_THEMES\_READ
* ROLE\_THEMES\_EDIT
* ROLE\_THEMES\_ADMIN
* ROLE\_THEMES\_IMPORT
* edit:
* ROLE\_THEMES\_EDIT
* ROLE\_THEMES\_ADMIN
* admin:
* ROLE\_THEMES\_ADMIN
The needed roles should be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`
Possible values for `AUTHORIZATIONNAME`: `MANAGECONTENTS`, `MANAGETAXONOMIES`, `MANAGEMEDIALIBRARY`, `MANAGETHEMES`.
Possible values for `SCOPENAME`: import, read, edit, admin.
For example, if you need to configure role access for import, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGECONTENTS_SCOPES_IMPORT_ROLESALLOWED: ROLE_CMS_CONTENT_IMPORT
```
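If several roles should grant the same scope, they can be listed as a comma-separated value (a sketch; the custom role name is a placeholder):
```yaml
SECURITY_ACCESSAUTHORIZATIONS_MANAGECONTENTS_SCOPES_IMPORT_ROLESALLOWED: ROLE_CMS_CONTENT_IMPORT,ROLE_CUSTOM_IMPORTER
```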
# FlowX Engine access rights
Granular access rights can be configured for restricting access to the Engine component.
Two different access authorizations are provided, each with specified access scopes:
1. **Manage-processes** - for configuring access for running test processes
Available scopes:
* **edit** - users are able to start processes for testing and to test action rules
2. **Manage-instances** - for configuring access for manipulating process instances
Available scopes:
* **read** - users can view the list of process instances
* **admin** - users are able to retry an action on a process instance token
The Engine service is preconfigured with the following default user roles for each of the access scopes mentioned above:
* **manage-processes**
* edit:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_EDIT
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* admin:
* ROLE\_ADMIN\_MANAGE\_PROCESS\_ADMIN
* **manage-instances**
* read:
* ROLE\_ENGINE\_MANAGE\_INSTANCE\_READ
* ROLE\_ENGINE\_MANAGE\_INSTANCE\_ADMIN
* admin:
* ROLE\_ENGINE\_MANAGE\_INSTANCE\_ADMIN
These roles need to be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`
Possible values for AUTHORIZATIONNAME: MANAGEPROCESSES, MANAGEINSTANCES.
Possible values for SCOPENAME: read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGEINSTANCES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```
# License Engine access rights
Granular access rights can be configured for restricting access to the License component.
The following access authorizations are provided, with the specified access scopes:
1. **Manage-licenses** - for configuring access for managing license related details
Available scopes:
* read - users are able to view the license report
* edit - users are able to update the license model and sync license data
* admin - users are able to download the license data
The License component is preconfigured with the following default user roles for each of the access scopes mentioned above:
* manage-licenses
* read:
* ROLE\_LICENSE\_MANAGE\_READ
* ROLE\_LICENSE\_MANAGE\_EDIT
* ROLE\_LICENSE\_MANAGE\_ADMIN
* edit:
* ROLE\_LICENSE\_MANAGE\_EDIT
* ROLE\_LICENSE\_MANAGE\_ADMIN
* admin:
* ROLE\_LICENSE\_MANAGE\_ADMIN
These roles need to be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`
Possible values for `AUTHORIZATIONNAME`: `MANAGELICENSES`.
Possible values for `SCOPENAME`: read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGELICENSES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```
# Documents plugin access rights
Granular access rights can be configured for restricting access to the Documents plugin component.
The following access authorization is provided, with the specified access scopes:
1. **Manage-document-templates** - for configuring access for managing document templates
Available scopes:
* import - users are able to import document templates
* read - users are able to view document templates
* edit - users are able to edit document templates
* admin - users are able to publish or delete document templates
The Document plugin is preconfigured with the following default user roles for each of the access scopes mentioned above:
* manage-document-templates
* import:
* ROLE\_DOCUMENT\_TEMPLATES\_IMPORT
* ROLE\_DOCUMENT\_TEMPLATES\_EDIT
* ROLE\_DOCUMENT\_TEMPLATES\_ADMIN
* read:
* ROLE\_DOCUMENT\_TEMPLATES\_READ
* ROLE\_DOCUMENT\_TEMPLATES\_IMPORT
* ROLE\_DOCUMENT\_TEMPLATES\_EDIT
* ROLE\_DOCUMENT\_TEMPLATES\_ADMIN
* edit:
* ROLE\_DOCUMENT\_TEMPLATES\_EDIT
* ROLE\_DOCUMENT\_TEMPLATES\_ADMIN
* admin:
* ROLE\_DOCUMENT\_TEMPLATES\_ADMIN
These roles need to be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
**`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`**
Possible values for AUTHORIZATIONNAME: MANAGEDOCUMENTTEMPLATES.
Possible values for SCOPENAME: import, read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGEDOCUMENTTEMPLATES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```
# Notifications plugin access rights
Granular access rights can be configured for restricting access to the Notification plugin component.
The following access authorizations are provided, with the specified access scopes:
1. **Manage-notification-templates** - for configuring access for managing notification templates
Available scopes:
* import - users are able to import notification templates
* read - users are able to view notification templates
* edit - users are able to edit notification templates
* admin - users are able to publish or delete notification templates
The Notification plugin is preconfigured with the following default user roles for each of the access scopes mentioned above:
* manage-notification-templates
* import:
* ROLE\_NOTIFICATION\_TEMPLATES\_IMPORT
* ROLE\_NOTIFICATION\_TEMPLATES\_EDIT
* ROLE\_NOTIFICATION\_TEMPLATES\_ADMIN
* read:
* ROLE\_NOTIFICATION\_TEMPLATES\_READ
* ROLE\_NOTIFICATION\_TEMPLATES\_IMPORT
* ROLE\_NOTIFICATION\_TEMPLATES\_EDIT
* ROLE\_NOTIFICATION\_TEMPLATES\_ADMIN
* edit:
* ROLE\_NOTIFICATION\_TEMPLATES\_EDIT
* ROLE\_NOTIFICATION\_TEMPLATES\_ADMIN
* admin:
* ROLE\_NOTIFICATION\_TEMPLATES\_ADMIN
These roles need to be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
**`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`**
Possible values for AUTHORIZATIONNAME: `MANAGENOTIFICATIONTEMPLATES`.
Possible values for SCOPENAME: import, read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGENOTIFICATIONTEMPLATES_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```
# Task management plugin access rights
Granular access rights can be configured for restricting access to the Task management plugin component.
Four different access authorizations are provided, each with specified access scopes:
1. **manage-tasks** - for configuring access for viewing the tasks lists
Available scopes:
* read - users are able to view tasks
2. **manage-hooks** - for configuring access for managing hooks
Available scopes:
* import - users are able to import hooks
* read - users are able to view hooks
* edit - users are able to edit hooks
* admin - users are able to delete hooks
3. **manage-process-allocation-settings** - for configuring access for managing process allocation settings
Available scopes:
* import - users are able to import allocation rules
* read - users are able to read/export allocation rules
* edit - users are able to edit access - create/edit allocation rules
* admin - users are able to delete allocation rules
4. **manage-out-of-office-users** - for configuring access for managing out-of-office users
Available scopes:
* read - users are able to view out-of-office records
* edit - users are able to create and edit out-of-office records
* admin - users are able to delete out-of-office records
The Task management plugin is preconfigured with the following default user roles for each of the access scopes mentioned above:
* manage-tasks
* read:
* ROLE\_TASK\_MANAGER\_TASKS\_READ
* manage-hooks
* import:
* ROLE\_TASK\_MANAGER\_HOOKS\_IMPORT
* ROLE\_TASK\_MANAGER\_HOOKS\_EDIT
* ROLE\_TASK\_MANAGER\_HOOKS\_ADMIN
* read:
* ROLE\_TASK\_MANAGER\_HOOKS\_READ
* ROLE\_TASK\_MANAGER\_HOOKS\_IMPORT
* ROLE\_TASK\_MANAGER\_HOOKS\_EDIT
* ROLE\_TASK\_MANAGER\_HOOKS\_ADMIN
* edit:
* ROLE\_TASK\_MANAGER\_HOOKS\_EDIT
* ROLE\_TASK\_MANAGER\_HOOKS\_ADMIN
* admin:
* ROLE\_TASK\_MANAGER\_HOOKS\_ADMIN
* manage-process-allocation-settings
* import:
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_IMPORT
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_EDIT
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_ADMIN
* read:
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_READ
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_IMPORT
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_EDIT
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_ADMIN
* edit:
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_EDIT
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_ADMIN
* admin:
* ROLE\_TASK\_MANAGER\_PROCESS\_ALLOCATION\_SETTINGS\_ADMIN
* manage-out-of-office-users
* read:
* ROLE\_TASK\_MANAGER\_OOO\_READ
* ROLE\_TASK\_MANAGER\_OOO\_EDIT
* ROLE\_TASK\_MANAGER\_OOO\_ADMIN
* edit:
* ROLE\_TASK\_MANAGER\_OOO\_EDIT
* ROLE\_TASK\_MANAGER\_OOO\_ADMIN
* admin:
* ROLE\_TASK\_MANAGER\_OOO\_ADMIN
These roles need to be defined in the chosen identity provider solution.
In case other custom roles are needed, you can configure them using environment variables. More than one role can be set for each access scope.
To configure access for each of the roles above, adapt the following input:
**`SECURITY_ACCESSAUTHORIZATIONS_AUTHORIZATIONNAME_SCOPES_SCOPENAME_ROLESALLOWED: NEEDED_ROLE_NAMES`**
Possible values for AUTHORIZATIONNAME: MANAGETASKS, MANAGEHOOKS, MANAGEPROCESSALLOCATIONSETTINGS, MANAGEOUTOFOFFICEUSERS.
Possible values for SCOPENAME: import, read, edit, admin.
For example, if you need to configure role access for read, insert this:
```
SECURITY_ACCESSAUTHORIZATIONS_MANAGEHOOKS_SCOPES_READ_ROLESALLOWED: ROLE_NAME_TEST
```