The Data Search service is a microservice that enables data searches across processes, either within the same application or in other applications. It lets you build processes that search for and retrieve data by combining Kafka actions with Elasticsearch mechanisms.
The Data Search service leverages Elasticsearch to execute searches based on indexed keys, using the platform's existing indexing mechanisms.
For our example, two process definitions are necessary:
First, create a process where data will be added. The second process will then be used to search for data in the instances of the first.
In the “Add Data Process Example”, note that mock data is added here to simulate the data that would exist within real processes.
Example of MVEL Business Rule:
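A minimal sketch of such a rule, assuming the common FlowX MVEL convention of writing values into the process instance via `output.put`; the key names (`application`, `client`, and their nested fields) are illustrative only:

```
// MVEL sketch: attach mock client data to the process instance.
// "application" and its nested keys are illustrative, assumed names.
client = new java.util.HashMap();
client.put("firstName", "John");
client.put("lastName", "Doe");
client.put("clientId", "C-000123");

application = new java.util.HashMap();
application.put("client", client);

output.put("application", application);
```

The keys written here are the ones you would later index and search on.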
Now we can play with this process and create some process instances with different states.
Configure the “Search process” to search data in the first created process instances:
Create process
Create a process using the Process Designer.
Displaying the results (optional)
Add a Task node within the process. Configure this node and add a business rule if you want to customize the display of results, e.g:
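As a sketch of such a rule, assuming the search response has been written to a `resultsWithData` key and that the rule follows the `input`/`output` MVEL convention (both names are assumptions), flattening the raw results for display might look like:

```
// MVEL sketch: flatten raw search results into a simple display list.
// "resultsWithData" and "displayRows" are assumed, illustrative keys.
results = input.get("resultsWithData");
rows = new java.util.ArrayList();
if (results != null) {
  foreach (r : results) {
    rows.add(r.get("data"));
  }
}
output.put("displayRows", rows);
```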
For displaying results in the UI, you can also consider utilizing Collections UI element.
Configure the search node
Add a user task and configure a send event using a Kafka send action. Configure the following parameters:
Topic name
The Kafka topic for the search service requests (defined by the KAFKA_TOPIC_DATA_SEARCH_IN environment variable in your deployment).
Body message
Indexing this key within the process is crucial for the Data Search service to effectively locate it. To enable indexing, navigate to your desired Application then choose the process definition and access Process Settings → Data Search.
❗️ Keys are indexed automatically when the process status changes (e.g., created, started, finished, failed, terminated, expired), when swimlanes are altered, or when stages are modified. To ensure immediate indexing, select the ‘Update in Task Management’ option either in the node configuration or within Process Settings → General tab.
["STARTED", "FINISHED", "ONHOLD", ...]
Include this parameter if you want to filter process instances by status; if it is omitted, instances with all statuses are returned. Check the Understanding the Process Status Data section for more examples of possible states.
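Putting these pieces together, a request body might look like the following; the field names (`searchKey`, `value`, `states`) are assumptions for illustration, so check the contract of your deployment:

```json
{
  "searchKey": "application.client.clientId",
  "value": "C-000123",
  "states": ["STARTED", "FINISHED"]
}
```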
Data to send
Advanced configuration (Headers)
{"processInstanceId": ${processInstanceId}}
If you also use callbackActions, you will need to add the following headers as well:
{"destinationId": "search_node", "callbacksForAction": "search_for_client"}
Example (dummy values extracted from a process):
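Combining the headers shown above with dummy values filled in, the headers would be sent as a single map (values are illustrative):

```json
{
  "processInstanceId": 12345,
  "destinationId": "search_node",
  "callbacksForAction": "search_for_client"
}
```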
Performing the search
A custom microservice (a core extension) will receive this event and search for the value in Elasticsearch.
Receiving the response
It will respond to the engine via a Kafka topic (defined by the KAFKA_TOPIC_DATA_SEARCH_OUT environment variable in your deployment). Add this topic in the node configuration of the user task where you previously added the Kafka send action.
The response’s body message will look like this:
Example (dummy values extracted from a process):
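A sketch of what the response body might contain, based on the fields mentioned in this section (a result list plus the tooManyResults flag); the exact shape and field names are assumptions, so verify them against your deployment:

```json
{
  "searchKey": "application.client.clientId",
  "tooManyResults": false,
  "result": [
    {
      "processInstanceUUID": "11111111-2222-3333-4444-555555555555",
      "status": "FINISHED",
      "data": {
        "application": {
          "client": { "firstName": "John", "lastName": "Doe" }
        }
      }
    }
  ]
}
```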
To view your process variables, tokens, and subprocesses, go to FLOWX.AI Designer > Active process > Process Instances. There you will find the response.
NOTE: Up to 50 results will be received if tooManyResults is true.
Example with dummy values extracted from a process:
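For the truncated case, the same assumed response shape applies, with tooManyResults set to true and the result list capped at 50 entries (dummy values, shortened to one entry here):

```json
{
  "searchKey": "application.client.lastName",
  "tooManyResults": true,
  "result": [
    {
      "processInstanceUUID": "11111111-2222-3333-4444-555555555555",
      "status": "STARTED",
      "data": {
        "application": {
          "client": { "firstName": "Jane", "lastName": "Doe" }
        }
      }
    }
  ]
}
```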
Enabling Elasticsearch indexing requires activating the configuration in the FlowX Engine. Check the indexing section for more details.
For deployment and service setup instructions, see the Data Search service setup guide.