Infrastructure prerequisites

Before setting up the Admin microservice, ensure the following components are in place:

  • Database Instance: The Admin microservice connects to the same database as the FlowX.AI Engine.

Dependencies

Ensure the following dependencies are met:

  • Database: Properly configured database instance.
  • Datasource: Connection details for the database that the FlowX.AI Engine also uses.
  • Kafka cluster: If you intend to use the FlowX.AI Audit functionality, ensure that the backend microservice can connect to the Kafka cluster. When connected to Kafka, it sends details about all database transactions to a configured Kafka topic.

Datasource configuration

To store process definitions, the Admin microservice connects to the same PostgreSQL / Oracle database as the Engine. Make sure to set the required database connection details.

The following configuration details need to be added using environment variables:

  • SPRING_DATASOURCE_URL: This environment variable is used to specify the URL of the database that the Admin microservice and Engine connect to. The URL typically includes the necessary information to connect to the database server, such as the host, port, and database name. It follows the format of the database’s JDBC URL, which is specific to the type of database being used (e.g., PostgreSQL or Oracle).
  • SPRING_DATASOURCE_USERNAME: This environment variable sets the username that the Admin microservice and Engine use to authenticate themselves when connecting to the database. The username identifies the user account that has access to the specified database.
  • SPRING_DATASOURCE_PASSWORD: This environment variable specifies the password associated with the username provided in the SPRING_DATASOURCE_USERNAME variable. The password is used to authenticate the user and grant access to the database.

Make sure the username, password, connection URL, and database name are configured correctly; otherwise, the service will fail with errors at startup.
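A hedged sketch of these variables as container environment entries (Kubernetes-style manifest); the host, port, database name, and credentials below are placeholders, not real values:

```yaml
# Hypothetical values for illustration only.
env:
  - name: SPRING_DATASOURCE_URL
    # JDBC URL format for PostgreSQL; an Oracle URL would use jdbc:oracle:thin:@... instead.
    value: "jdbc:postgresql://postgres-host:5432/flowx-db"
  - name: SPRING_DATASOURCE_USERNAME
    value: "flowx-user"
  - name: SPRING_DATASOURCE_PASSWORD
    value: "flowx-password"   # in practice, mount this from a secret
```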

The database schema is managed by a Liquibase script provided with the Engine.

MongoDB configuration

The Admin microservice also connects to a MongoDB database instance for additional data management. Configure the MongoDB connection with the following environment variables:

  • SPRING_DATA_MONGODB_URI: URI for connecting to the Admin MongoDB instance.
    • Format: mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<arbiter-host>:<port>/${DB_NAME}?retryWrites=false
  • DB_USERNAME: admin.
  • DB_PASSWORD: the database password.
  • DB_NAME: admin.
  • SPRING_DATA_MONGODB_STORAGE: Specifies the storage type used for the Runtime MongoDB instance (Azure environments only).
    • Possible values: mongodb, cosmosdb
    • Default value: mongodb

Ensure that the MongoDB configuration is compatible with the same database requirements as the FlowX.AI Engine, especially if sharing database instances.
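A hedged sketch of the MongoDB variables as container environment entries; the hostnames, port, and password are placeholders, not real values:

```yaml
# Hypothetical values for illustration only; replace with your own hosts and credentials.
env:
  - name: DB_USERNAME
    value: "admin"
  - name: DB_PASSWORD
    value: "mongo-password"             # in practice, mount this from a secret
  - name: DB_NAME
    value: "admin"
  - name: SPRING_DATA_MONGODB_URI
    # Follows the format shown above, with the placeholders filled in literally.
    value: "mongodb://admin:mongo-password@mongodb-0,mongodb-1,mongodb-arbiter:27017/admin?retryWrites=false"
  - name: SPRING_DATA_MONGODB_STORAGE   # Azure environments only
    value: "mongodb"
```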

Kafka configuration

The Admin microservice uses Kafka to send audit logs, manage scheduled timer events, cache platform component versions, and publish start timer event updates. Both producers and consumers must be configured to ensure proper communication between services.

General Kafka configuration

| Environment Variable | Description | Example Value | Default Value |
| --- | --- | --- | --- |
| SPRING_KAFKA_BOOTSTRAP_SERVERS | Kafka broker addresses | localhost:9092 | localhost:9092 |
| SPRING_KAFKA_SECURITY_PROTOCOL | Security protocol | PLAINTEXT | PLAINTEXT |
| KAFKA_MESSAGE_MAX_BYTES | Maximum message size | 52428800 (50 MB) | 52428800 |

Kafka producer configuration

| Environment Variable | Description | Example Value |
| --- | --- | --- |
| SPRING_KAFKA_PRODUCER_KEY_SERIALIZER | Key serializer class | org.apache.kafka.common.serialization.StringSerializer |
| SPRING_KAFKA_PRODUCER_MAX_REQUEST_SIZE | Maximum request size | 52428800 (50 MB) |

Kafka consumer configuration

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| KAFKA_CONSUMER_GROUP_ID_GENERIC_PROCESSING | Generic processing consumer group | genericProcessingGroup |
| KAFKA_CONSUMER_THREADS_GENERIC_PROCESSING | Generic processing threads | 6 |
| KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL | Auth exception retry interval (seconds) | 10 |
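A sketch combining the variables above as container environment entries; the broker list is a placeholder, and the remaining values mirror the defaults from the tables:

```yaml
# Hypothetical broker addresses; the other values are the documented defaults.
env:
  - name: SPRING_KAFKA_BOOTSTRAP_SERVERS
    value: "kafka-0.kafka:9092,kafka-1.kafka:9092"
  - name: SPRING_KAFKA_SECURITY_PROTOCOL
    value: "PLAINTEXT"
  - name: KAFKA_MESSAGE_MAX_BYTES
    value: "52428800"   # 50 MB
  - name: SPRING_KAFKA_PRODUCER_MAX_REQUEST_SIZE
    value: "52428800"   # keep in sync with KAFKA_MESSAGE_MAX_BYTES
  - name: KAFKA_CONSUMER_GROUP_ID_GENERIC_PROCESSING
    value: "genericProcessingGroup"
  - name: KAFKA_CONSUMER_THREADS_GENERIC_PROCESSING
    value: "6"
```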

Topic naming convention and pattern creation

The Admin microservice uses a consistent naming structure for Kafka topics to ensure standardized communication.

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| KAFKA_TOPIC_NAMING_PACKAGE | Base package | ai.flowx. |
| KAFKA_TOPIC_NAMING_ENVIRONMENT | Environment prefix | dev. |
| KAFKA_TOPIC_NAMING_VERSION | Topic version | .v1 |
| KAFKA_TOPIC_NAMING_SEPARATOR | Primary separator | . |
| KAFKA_TOPIC_NAMING_SEPARATOR2 | Secondary separator | - |

Topics are constructed using the following pattern:

{prefix} + service + {separator/dot} + action + {separator/dot} + detail + {suffix}

Where:

  • prefix is ${kafka.topic.naming.package}${kafka.topic.naming.environment} (e.g., ai.flowx.dev.)
  • suffix is ${kafka.topic.naming.version} (e.g., .v1)
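As a worked illustration, with the default values above, the audit topic from the next section is assembled like this (the property paths mirror the ${kafka.topic.naming.*} placeholders referenced above):

```yaml
# Default naming values, shown as application properties (illustration only).
kafka:
  topic:
    naming:
      package: "ai.flowx."
      environment: "dev."
      version: ".v1"
      separator: "."
      separator2: "-"
# prefix = package + environment = "ai.flowx.dev."
# suffix = version               = ".v1"
# prefix + "core" + "." + "trigger" + "." + "save" + "." + "audit" + suffix
#   => ai.flowx.dev.core.trigger.save.audit.v1
```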

Kafka topic configuration

Audit topics

| Environment Variable | Description | Pattern | Example Value |
| --- | --- | --- | --- |
| KAFKA_TOPIC_AUDIT_OUT | Audit output topic | ${kafka.topic.naming.prefix}core${dot}trigger${dot}save${dot}audit${kafka.topic.naming.suffix} | ai.flowx.dev.core.trigger.save.audit.v1 |

Platform topics

| Environment Variable | Description | Pattern | Example Value |
| --- | --- | --- | --- |
| KAFKA_TOPIC_PLATFORM_COMPONENTS_VERSIONS_IN | Components versions caching topic | ${kafka.topic.naming.prefix}core${dot}trigger${dot}platform${dot}versions${dot}caching${kafka.topic.naming.suffix} | ai.flowx.dev.core.trigger.platform.versions.caching.v1 |

Events Gateway topics

| Environment Variable | Description | Pattern | Example Value |
| --- | --- | --- | --- |
| KAFKA_TOPIC_EVENTS_GATEWAY_OUT_MESSAGE | Commands message output topic | ${kafka.topic.naming.prefix}eventsgateway${dot}process${dot}commands${dot}message${kafka.topic.naming.suffix} | ai.flowx.dev.eventsgateway.process.commands.message.v1 |

Build topics

| Environment Variable | Description | Pattern | Example Value |
| --- | --- | --- | --- |
| KAFKA_TOPIC_BUILD_START_TIMER_EVENTS_OUT_UPDATES | Start timer events updates topic | ${kafka.topic.naming.prefix}build${dot}start${dash}timer${dash}events${dot}updates${dot}in${kafka.topic.naming.suffix} | ai.flowx.dev.build.start-timer-events.updates.in.v1 |

Resource topics

| Environment Variable | Description | Pattern | Example Value |
| --- | --- | --- | --- |
| KAFKA_TOPIC_RESOURCES_USAGES_REFRESH | Resources usages refresh topic | ${kafka.topic.naming.prefix}application${dash}version${dot}resources${dash}usages${dot}refresh${kafka.topic.naming.suffix} | ai.flowx.dev.application-version.resources-usages.refresh.v1 |

OAuth authentication (when using SASL_PLAINTEXT)

When using OAuth authentication with Kafka (SASL_PLAINTEXT), activate the kafka-auth profile and configure the following environment variables:

| Environment Variable | Description | Example Value |
| --- | --- | --- |
| KAFKA_OAUTH_CLIENT_ID | OAuth client ID | kafka |
| KAFKA_OAUTH_CLIENT_SECRET | OAuth client secret | kafka-secret |
| KAFKA_OAUTH_TOKEN_ENDPOINT_URI | OAuth token endpoint | kafka.auth.localhost |

When using the kafka-auth profile, the security protocol will be set to SASL_PLAINTEXT and the appropriate SASL mechanism and configuration will be applied automatically.
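A hedged sketch of the OAuth-related variables; it assumes the kafka-auth profile is activated through Spring's standard SPRING_PROFILES_ACTIVE variable, and the credential values are placeholders:

```yaml
# Hypothetical values for illustration only.
env:
  - name: SPRING_PROFILES_ACTIVE      # assumed mechanism for activating the profile
    value: "kafka-auth"
  - name: KAFKA_OAUTH_CLIENT_ID
    value: "kafka"
  - name: KAFKA_OAUTH_CLIENT_SECRET
    value: "kafka-secret"             # in practice, mount this from a secret
  - name: KAFKA_OAUTH_TOKEN_ENDPOINT_URI
    value: "kafka.auth.localhost"
```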

Redis configuration

Set the following environment variables to your Redis connection details:

  • SPRING_REDIS_HOST
  • SPRING_REDIS_PASSWORD
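A minimal sketch with placeholder values:

```yaml
# Hypothetical values for illustration only.
env:
  - name: SPRING_REDIS_HOST
    value: "redis-master"      # placeholder host
  - name: SPRING_REDIS_PASSWORD
    value: "redis-password"    # in practice, mount this from a secret
```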

Logging

The following environment variables can be set to control log levels:

  • LOGGING_LEVEL_ROOT - root Spring Boot microservice logs
  • LOGGING_LEVEL_APP - application-level logs
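For example (both levels are placeholders; any standard level such as INFO, DEBUG, or WARN can be used):

```yaml
# Hypothetical values for illustration only.
env:
  - name: LOGGING_LEVEL_ROOT
    value: "INFO"     # root Spring Boot log level
  - name: LOGGING_LEVEL_APP
    value: "DEBUG"    # application-level log level
```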

Authorization & access roles

The following variables need to be set in order to connect to the identity management platform:

  • SECURITY_OAUTH2_BASE_SERVER_URL
  • SECURITY_OAUTH2_CLIENT_CLIENT_ID
  • SECURITY_OAUTH2_REALM

A specific service account should be configured in the OpenID provider to allow the Admin microservice to access realm-specific data. It can be configured using the following environment variables:

  • SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID - the OpenID service account client ID
  • SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET - the OpenID service account client secret

The following configuration is needed to clear a user's offline sessions from the identity provider:

  • FLOWX_AUTHENTICATE_CLIENTID
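A hedged sketch pulling the identity-related variables together; every value below is a placeholder for illustration, not a real endpoint or credential:

```yaml
# Hypothetical values for illustration only.
env:
  - name: SECURITY_OAUTH2_BASE_SERVER_URL
    value: "https://idp.example.com/auth"          # placeholder IdP URL
  - name: SECURITY_OAUTH2_CLIENT_CLIENT_ID
    value: "flowx-platform"                        # placeholder client ID
  - name: SECURITY_OAUTH2_REALM
    value: "flowx"                                 # placeholder realm
  - name: SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID
    value: "flowx-service-account"                 # placeholder service account client ID
  - name: SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET
    value: "service-account-secret"                # in practice, mount this from a secret
  - name: FLOWX_AUTHENTICATE_CLIENTID
    value: "flowx-authenticate"                    # placeholder client ID
```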

For details on setting up access roles, see Configuring access rights for admin.

Elasticsearch

  • SPRING_ELASTICSEARCH_REST_URIS
  • SPRING_ELASTICSEARCH_REST_DISABLESSL
  • SPRING_ELASTICSEARCH_INDEX_SETTINGS_NAME
  • SPRING_ELASTICSEARCH_REST_USERNAME
  • SPRING_ELASTICSEARCH_REST_PASSWORD
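A hedged sketch with placeholder values; the cluster URI, index name, and credentials are illustrative only:

```yaml
# Hypothetical values for illustration only.
env:
  - name: SPRING_ELASTICSEARCH_REST_URIS
    value: "https://elasticsearch:9200"   # placeholder cluster URI
  - name: SPRING_ELASTICSEARCH_REST_DISABLESSL
    value: "false"
  - name: SPRING_ELASTICSEARCH_INDEX_SETTINGS_NAME
    value: "admin-index"                  # placeholder index name
  - name: SPRING_ELASTICSEARCH_REST_USERNAME
    value: "elastic"                      # placeholder username
  - name: SPRING_ELASTICSEARCH_REST_PASSWORD
    value: "elastic-password"             # in practice, mount this from a secret
```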

Undo/redo actions

Undo/redo behavior is configured under the flowx.undo-redo key:

```yaml
flowx:
  undo-redo:
    ttl: 6000000  # Redis TTL for undoable actions by user+nodeid (in seconds)
    cleanup:
      cronExpression: "0 0 2 * * *"  # every day at 2am (six-field Spring cron: sec min hour day month weekday)
      days: 2  # items marked as deleted are permanently removed once older than this many days
```