Is the Redis instance persistent?
No. Redis is used only as a caching solution, not for persistent storage. However, advancing through process flows depends on the information cached in Redis; if the cache is emptied while there are active processes, those processes might become blocked.
How can I generate A4 PDF documents using HTML templates?
You should include the following bit of CSS in the HTML template, inside an @page rule:

@page {
  size: A4 portrait;
}
What are the adjustable elements of the signature area?
The size of the area cannot be changed; however, you can edit and format the text inside it.
How are S3 buckets organized?
The FLOWX platform can be set up to use any S3 compatible cloud storage solution for storing documents. These use buckets in order to organize files. The platform uses the following buckets:
- the main bucket prefix for the platform can be configured in the Document Plugin configuration
- the docx templates used for generating documents are stored under
- custom application-related files (not specific to a certain user) are stored under
- documents related to a process instance are stored under
- the OCR plugin stores extracted signatures in a bucket named extracted-signatures; the bucket name can be configured via an environment variable
All storage solutions have space limitations that need to be taken into consideration. Any necessary file cleanup can be done via the provided REST API or by configuring the lifecycle of the bucket contents.
Here's an example of how the lifecycle configuration can be done on MinIO:
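A minimal sketch of such a lifecycle configuration, assuming a hypothetical 90-day retention period (the rule ID, prefix, and number of days are placeholders, not FLOWX defaults):

```json
{
  "Rules": [
    {
      "ID": "expire-old-process-documents",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 90 }
    }
  ]
}
```

Saved as lifecycle.json, this can be applied with the MinIO client, for example `mc ilm import myminio/process-documents < lifecycle.json` (the alias and bucket name are placeholders for your own setup).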
What does the notification plugin use for template management?
We use the Thymeleaf engine to define templates for all types of notifications: text-based templates for email subjects, SMS, and push notifications, and HTML templates for email bodies.
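As an illustration, a minimal HTML email-body template in Thymeleaf might look like this (the variable names clientName and otpCode are hypothetical, not the plugin's actual model):

```html
<!-- Hypothetical Thymeleaf email body; variable names are illustrative -->
<html xmlns:th="http://www.thymeleaf.org">
  <body>
    <p th:text="'Hello, ' + ${clientName} + '!'">Hello!</p>
    <p>Your one-time password is <strong th:text="${otpCode}">000000</strong>.</p>
  </body>
</html>
```

At render time, Thymeleaf replaces the placeholder text inside each tag with the evaluated th:text expression.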
Do OTPs expire?
Yes. OTPs have a limited validity period, which can be configured in the notifications plugin. The default value is 10 minutes.
What happens if I generate multiple OTPs for the same action?
If a new OTP is generated for the same user and channel (phone number, email address), the previously generated OTP is invalidated when the new one is saved. The latest generated OTP must be used by the user to continue the flow.
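The invalidation rule above can be sketched in a few lines: saving a new OTP for the same (user, channel) pair replaces any previously issued code. This is a minimal illustration, not the actual plugin implementation; the class and method names are invented.

```python
# Sketch of the OTP invalidation rule; names and storage are illustrative.
class OtpStore:
    def __init__(self):
        # keyed by (user_id, channel), so only one OTP is active per pair
        self._active = {}

    def issue(self, user_id, channel, code):
        # overwriting the entry implicitly invalidates the previous OTP
        self._active[(user_id, channel)] = code
        return code

    def validate(self, user_id, channel, code):
        # only the latest issued code for this user/channel is accepted
        return self._active.get((user_id, channel)) == code


store = OtpStore()
store.issue("user-1", "sms:+40700000000", "111111")
store.issue("user-1", "sms:+40700000000", "222222")  # invalidates 111111
print(store.validate("user-1", "sms:+40700000000", "111111"))  # False
print(store.validate("user-1", "sms:+40700000000", "222222"))  # True
```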
Does FLOWX need a data warehouse/data lake?
No. A FLOWX implementation does not depend on the existence of a data warehouse.
Connecting to raw data sources can be done as long as there are APIs that provide that data (for example an API for a CRM can access customer information or create new information about existing or new customers).
What databases does FLOWX have APIs for?
Oracle, MongoDB, PostgreSQL. Connecting directly to a database is done through Kafka connectors.
What other API services can FLOWX connect to?
FLOWX can connect to any API - SOAP, REST, etc.
How does FLOWX orchestrate the data flow?
FLOWX orchestrates the data flow by modeling business processes using BPMN concepts.
There are "send event" nodes that send events to other systems on an event bus (Kafka). These events are transformed into calls to the various APIs that are orchestrated by FLOWX.
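The event-to-API translation can be sketched as follows. This is an illustrative adapter, not FLOWX internals; the topic names, endpoints, and payload shape are hypothetical.

```python
# Illustrative sketch: an adapter translates a bus event into the REST
# call it should perform. Topics and endpoints here are invented.
def to_api_request(event):
    """Map a bus event to a description of the corresponding API call."""
    endpoints = {
        "customer.lookup": ("GET", "/crm/customers/{customerId}"),
        "customer.create": ("POST", "/crm/customers"),
    }
    method, path_template = endpoints[event["topic"]]
    return {
        "method": method,
        "path": path_template.format(**event["payload"]),
        "body": event["payload"] if method == "POST" else None,
    }


event = {"topic": "customer.lookup", "payload": {"customerId": "42"}}
print(to_api_request(event))
# {'method': 'GET', 'path': '/crm/customers/42', 'body': None}
```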
What is the FLOWX scalability limit, regardless of the size of the available infrastructure?
There is no hard limit imposed by the platform, which can scale horizontally to support loads of tens of thousands of users per second. Usually, the limits come from the systems it orchestrates or from hardware constraints.
FLOWX was created to orchestrate business processes that involve several steps, business rules between these steps, user interactions, and other systems.
What kind of UI can FLOWX generate?
The FLOWX Platform is able to generate screens based on a business flow, but it can also orchestrate custom screens developed by a developer, so all UI needs can be addressed.
What information does FLOWX store and for how long?
FLOWX stores data for the duration of a business process. There is a routine that, depending on the configuration, deletes processes older than a certain number of days.
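The retention routine can be sketched like this, assuming a hypothetical list of finished process instances and a configurable number of retention days (field names are invented for illustration):

```python
# Sketch of a retention check: select processes older than the cutoff.
from datetime import datetime, timedelta


def expired(processes, retention_days, now=None):
    """Return the process instances that are past the retention window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=retention_days)
    return [p for p in processes if p["finished_at"] < cutoff]


procs = [
    {"id": 1, "finished_at": datetime(2024, 1, 1)},
    {"id": 2, "finished_at": datetime(2024, 6, 1)},
]
print([p["id"] for p in expired(procs, 30, now=datetime(2024, 6, 15))])  # [1]
```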
What kind of data structure does FLOWX have?
FLOWX does not have a predefined data structure.
How is data from a newly created field recorded in the legacy databases?
Each input has a key under which its data is saved in the process database. At a certain point, the data is sent to a legacy system through an adapter. At that moment, a mapping is made between the key under which the information was saved in FLOWX and the key under which it must arrive in the other system.
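The key mapping described above can be sketched as a simple translation table. The process-side and legacy-side key names below are invented for illustration:

```python
# Sketch of the adapter-side key mapping; all key names are hypothetical.
PROCESS_TO_LEGACY = {
    "client.firstName": "CUST_FNAME",
    "client.lastName": "CUST_LNAME",
    "client.email": "CUST_EMAIL",
}


def map_to_legacy(process_data):
    """Translate process-instance keys into the legacy system's keys."""
    return {
        PROCESS_TO_LEGACY[key]: value
        for key, value in process_data.items()
        if key in PROCESS_TO_LEGACY  # unmapped keys are not forwarded
    }


payload = map_to_legacy({"client.firstName": "Ana", "client.email": "a@ex.com"})
print(payload)  # {'CUST_FNAME': 'Ana', 'CUST_EMAIL': 'a@ex.com'}
```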