Ensure the following infrastructure components are available before starting the CMS service:

  • Docker Engine: Version 17.06 or higher.
  • MongoDB: Version 4.4 or higher for storing taxonomies and content.
  • Redis: Version 6.0 or higher.
  • Kafka: Version 2.8 or higher.
  • Elasticsearch: Version 7.11.0 or higher.
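
For local evaluation, the dependencies above can be brought up with a Compose file along these lines. This is a minimal sketch only: the image tags, service names, and settings are illustrative, not official FlowX defaults, and production deployments will need authentication, persistence, and replica configuration.

```yaml
# docker-compose.yml — illustrative local stack matching the minimum versions above
services:
  mongodb:
    image: mongo:4.4
  redis:
    image: redis:6.0
  zookeeper:
    image: bitnami/zookeeper:3.7
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka:2.8.1
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on: [zookeeper]
  elasticsearch:
    image: elasticsearch:7.11.0
    environment:
      - discovery.type=single-node
```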

The service is pre-configured with most default values. However, some environment variables require customization during setup.

Configuration

Set application defaults

Define the default application name for retrieving content:

```yaml
application:
  defaultApplication: ${DEFAULT_APPLICATION:flowx}
```

If `DEFAULT_APPLICATION` is not set, the value defaults to `flowx`.

Configuring authorization & access roles

Connect the CMS to an OAuth 2.0 identity management platform by setting the following variables:

| Environment variable | Description |
| --- | --- |
| `SECURITY_OAUTH2_BASE_SERVER_URL` | Base URL for the OAuth 2.0 Authorization Server |
| `SECURITY_OAUTH2_CLIENT_CLIENT_ID` | Unique identifier for the client application |
| `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` | Secret key to authenticate client requests |
| `SECURITY_OAUTH2_REALM` | Realm name for OAuth 2.0 provider authentication |
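
For example, when pointing the CMS at a Keycloak instance, these variables might look as follows. The hostname, realm, and client names are placeholders, not defaults:

```yaml
environment:
  SECURITY_OAUTH2_BASE_SERVER_URL: https://keycloak.example.com/auth
  SECURITY_OAUTH2_CLIENT_CLIENT_ID: cms-core
  SECURITY_OAUTH2_CLIENT_CLIENT_SECRET: <client-secret>
  SECURITY_OAUTH2_REALM: flowx
```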

For detailed role and access configuration, refer to FlowX CMS access rights.

Configuring MongoDB

The CMS requires MongoDB for taxonomy and content storage. Configure MongoDB with the following variables:

| Environment variable | Description | Default value |
| --- | --- | --- |
| `SPRING_DATA_MONGODB_URI` | URI for connecting to the CMS MongoDB instance | Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<arbiter-host>:<port>/${DB_NAME}?retryWrites=false` |
| `DB_USERNAME` | MongoDB username | `cms-core` |
| `DB_NAME` | MongoDB database name | `cms-core` |
| `DB_PASSWORD` | MongoDB password | — |
| `MONGOCK_TRANSACTIONENABLED` | Enables MongoDB transactions for the Mongock library | `false` (set to `false` to support successful migrations) |

Set MONGOCK_TRANSACTIONENABLED to false due to known issues with transactions in MongoDB version 5.
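
Put together, a typical CMS MongoDB configuration might resemble the following. The replica-set hostnames and port are illustrative placeholders that must be replaced with your deployment's values:

```yaml
environment:
  DB_USERNAME: cms-core
  DB_NAME: cms-core
  DB_PASSWORD: <password>
  SPRING_DATA_MONGODB_URI: mongodb://cms-core:<password>@mongodb-0:27017,mongodb-1:27017,mongodb-arbiter:27017/cms-core?retryWrites=false
  MONGOCK_TRANSACTIONENABLED: "false"
```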

Configuring MongoDB (runtime database - additional data)

CMS also connects to a Runtime MongoDB instance for operational data:

| Environment variable | Description | Default value |
| --- | --- | --- |
| `SPRING_DATA_MONGODB_RUNTIME_URI` | URI for connecting to Runtime MongoDB | Format: `mongodb://${RUNTIME_DB_USERNAME}:${RUNTIME_DB_PASSWORD}@<host1>,<host2>,<arbiter-host>:<port>/${RUNTIME_DB_NAME}?retryWrites=false` |
| `RUNTIME_DB_USERNAME` | Runtime MongoDB username | `app-runtime` |
| `RUNTIME_DB_NAME` | Runtime MongoDB database name | `app-runtime` |
| `RUNTIME_DB_PASSWORD` | Runtime MongoDB password | — |
| `SPRING_DATA_MONGODB_STORAGE` | Storage type for Runtime MongoDB (Azure environments only) | `mongodb` (options: `mongodb`, `cosmosdb`) |
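
An illustrative runtime-database configuration (hostnames are placeholders; the `cosmosdb` storage option applies only on Azure):

```yaml
environment:
  RUNTIME_DB_USERNAME: app-runtime
  RUNTIME_DB_NAME: app-runtime
  RUNTIME_DB_PASSWORD: <password>
  SPRING_DATA_MONGODB_RUNTIME_URI: mongodb://app-runtime:<password>@runtime-mongodb:27017/app-runtime?retryWrites=false
  # SPRING_DATA_MONGODB_STORAGE: cosmosdb   # only relevant in Azure environments
```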

Configuring Redis

The service can use the same Redis component deployed for the engine. See Redis Configuration.

| Environment variable | Description |
| --- | --- |
| `SPRING_REDIS_HOST` | Hostname or IP of the Redis server |
| `SPRING_REDIS_PASSWORD` | Authentication password for Redis |
| `REDIS_TTL` | Maximum time-to-live for Redis cache keys (in seconds) |
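
For example (the hostname and TTL value below are illustrative, not defaults):

```yaml
environment:
  SPRING_REDIS_HOST: redis-master
  SPRING_REDIS_PASSWORD: <password>
  REDIS_TTL: "3600"   # 1 hour; choose a value appropriate for your content churn
```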

Configuring Kafka

Connection settings

| Environment variable | Description | Default value |
| --- | --- | --- |
| `SPRING_KAFKA_BOOTSTRAP_SERVERS` | Address of the Kafka server | `localhost:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol for Kafka | `"PLAINTEXT"` |

Consumer and producer configuration

| Environment variable | Description | Default value |
| --- | --- | --- |
| `SPRING_KAFKA_CONSUMER_GROUP_ID` | Defines the Kafka consumer group | — |
| `KAFKA_CONSUMER_THREADS` | Number of Kafka consumer threads | — |
| `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` | Retry interval after an authorization exception | `10` |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size in bytes | `52428800` (50 MB) |

Consumer group configuration

| Environment variable | Description | Default value |
| --- | --- | --- |
| `KAFKA_CONSUMER_GROUP_ID_CONTENT_TRANSLATE` | Group ID for content translation | `cms-consumer-preview` |
| `KAFKA_CONSUMER_GROUP_ID_RES_USAGE_VALIDATION` | Group ID for resource usage validation | `cms-res-usage-validation-group` |

Consumer thread configuration

| Environment variable | Description | Default value |
| --- | --- | --- |
| `KAFKA_CONSUMER_THREADS_CONTENT_TRANSLATE` | Threads for content translation | `1` |
| `KAFKA_CONSUMER_THREADS_RES_USAGE_VALIDATION` | Threads for resource usage validation | `2` |
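
A combined Kafka consumer configuration might look like this sketch, overriding the thread counts for a higher-throughput environment (the broker address and thread values are illustrative):

```yaml
environment:
  SPRING_KAFKA_BOOTSTRAP_SERVERS: kafka-broker:9092
  KAFKA_CONSUMER_GROUP_ID_CONTENT_TRANSLATE: cms-consumer-preview
  KAFKA_CONSUMER_GROUP_ID_RES_USAGE_VALIDATION: cms-res-usage-validation-group
  KAFKA_CONSUMER_THREADS_CONTENT_TRANSLATE: "2"
  KAFKA_CONSUMER_THREADS_RES_USAGE_VALIDATION: "4"
```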

Topic configuration

The suggested topic naming convention is:

```yaml
kafka:
  topic:
    naming:
      package: "ai.flowx."
      environment: "dev."
      version: ".v1"
      prefix: ${kafka.topic.naming.package}${kafka.topic.naming.environment}
      suffix: ${kafka.topic.naming.version}
      engineReceivePattern: engine.receive.
```
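
With these values, a full topic name is composed as prefix + base name + suffix. As an illustrative trace (not additional configuration), the default content-retrieval request topic resolves like this:

```yaml
# prefix = package + environment = "ai.flowx." + "dev." = "ai.flowx.dev."
# suffix = version = ".v1"
# base   = "plugin.cms.trigger.retrieve.content"
# result = "ai.flowx.dev.plugin.cms.trigger.retrieve.content.v1"
```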

Content request topics

| Environment variable | Description | Default value |
| --- | --- | --- |
| `KAFKA_TOPIC_REQUEST_CONTENT_IN` | Topic for incoming content retrieval requests | `ai.flowx.dev.plugin.cms.trigger.retrieve.content.v1` |
| `KAFKA_TOPIC_REQUEST_CONTENT_OUT` | Topic for content retrieval results | `ai.flowx.dev.engine.receive.plugin.cms.retrieve.content.results.v1` |

Audit topics

| Environment variable | Description | Default value |
| --- | --- | --- |
| `KAFKA_TOPIC_AUDIT_OUT` | Topic for sending audit logs | `ai.flowx.dev.core.trigger.save.audit.v1` |

Application resource usage validation

| Environment variable | Description | Default value |
| --- | --- | --- |
| `KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION` | Topic for resource usage validation | `ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1` |

The engine consumes messages from every topic whose name matches its configured receive pattern.

Inter-Service topic coordination

When configuring Kafka topics in the FlowX ecosystem, it’s critical to ensure proper coordination between services:

  1. Topic name matching: Output topics from one service must match the expected input topics of another service.

    For example:

    • KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_OUT_CMS on Application Manager must match KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION on CMS
  2. Pattern consistency: The pattern values must be consistent across services:

    • Process Engine listens to topics matching: ai.flowx.dev.engine.receive.*
    • Integration Designer listens to topics matching: ai.flowx.dev.integration.receive.*
  3. Communication flow:

    • Other services write to topics matching the Engine’s pattern → Process Engine listens
    • Process Engine writes to topics matching the Integration Designer’s pattern → Integration Designer listens

The exact pattern value isn’t critical, but it must be identical across all connected services. Some deployments require manually creating Kafka topics in advance rather than dynamically. In these cases, all topic names must be explicitly defined and coordinated.
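
As a concrete illustration of the topic-matching rule, the two sides of the resource-usage validation flow must point at exactly the same topic. The values below reuse the defaults from the tables above:

```yaml
# Application Manager (producer side)
KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_OUT_CMS: ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1

# CMS (consumer side) — must be byte-for-byte identical
KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION: ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1
```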

Kafka authentication

For secure environments, enable OAuth authentication with the following configuration:

```yaml
spring.config.activate.on-profile: kafka-auth

spring:
  kafka:
    security.protocol: "SASL_PLAINTEXT"
    properties:
      sasl:
        mechanism: "OAUTHBEARER"
        jaas.config: "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"${KAFKA_OAUTH_CLIENT_ID:kafka}\" oauth.client.secret=\"${KAFKA_OAUTH_CLIENT_SECRET:kafka-secret}\" oauth.token.endpoint.uri=\"${KAFKA_OAUTH_TOKEN_ENDPOINT_URI:kafka.auth.localhost}\" ;"
        login.callback.handler.class: io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler
```

Required environment variables:

  • KAFKA_OAUTH_CLIENT_ID
  • KAFKA_OAUTH_CLIENT_SECRET
  • KAFKA_OAUTH_TOKEN_ENDPOINT_URI

Configuring logging

| Environment variable | Description |
| --- | --- |
| `LOGGING_LEVEL_ROOT` | Log level for root service logs |
| `LOGGING_LEVEL_APP` | Log level for application-specific logs |
| `LOGGING_LEVEL_MONGO_DRIVER` | Log level for the MongoDB driver |
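
For example, to keep the service quiet while debugging application code (the level choices below are illustrative):

```yaml
environment:
  LOGGING_LEVEL_ROOT: INFO
  LOGGING_LEVEL_APP: DEBUG
  LOGGING_LEVEL_MONGO_DRIVER: WARN
```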

Configuring file storage

Public storage

| Environment variable | Description |
| --- | --- |
| `APPLICATION_FILE_STORAGE_S3_SERVER_URL` | URL of S3 server for file storage |
| `APPLICATION_FILE_STORAGE_S3_BUCKET_NAME` | S3 bucket name |
| `APPLICATION_FILE_STORAGE_S3_ROOT_DIRECTORY` | Root directory in S3 bucket |
| `APPLICATION_FILE_STORAGE_S3_CREATE_BUCKET` | Auto-create bucket if it doesn't exist (`true`/`false`) |
| `APPLICATION_FILE_STORAGE_S3_PUBLIC_URL` | Public URL for accessing files |
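
An illustrative public-storage configuration against a MinIO-style S3 endpoint (all hostnames, bucket names, and the public URL are placeholders):

```yaml
environment:
  APPLICATION_FILE_STORAGE_S3_SERVER_URL: http://minio:9000
  APPLICATION_FILE_STORAGE_S3_BUCKET_NAME: cms-public
  APPLICATION_FILE_STORAGE_S3_ROOT_DIRECTORY: cms
  APPLICATION_FILE_STORAGE_S3_CREATE_BUCKET: "true"
  APPLICATION_FILE_STORAGE_S3_PUBLIC_URL: https://files.example.com
```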

Private storage

Private CMS securely stores uploaded documents and AI-generated documents, keeping them hidden from the Media Library and accessible only through authenticated endpoints with access-token permissions. Files can be retrieved using tags (e.g., `ai_document`, `ref:UUID_doc`) and are excluded from application builds.

| Environment variable | Description |
| --- | --- |
| `APPLICATION_FILE_STORAGE_S3_PRIVATE_SERVER_URL` | URL of S3 server for private storage |
| `APPLICATION_FILE_STORAGE_S3_PRIVATE_BUCKET_NAME` | S3 bucket name for private storage |
| `APPLICATION_FILE_STORAGE_S3_PRIVATE_CREATE_BUCKET` | Auto-create private bucket (`true`/`false`) |
| `APPLICATION_FILE_STORAGE_S3_PRIVATE_ACCESS_KEY` | Access key for private S3 server |
| `APPLICATION_FILE_STORAGE_S3_PRIVATE_SECRET_KEY` | Secret key for private S3 server |

Configuring file upload size

| Environment variable | Description | Default value |
| --- | --- | --- |
| `SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE` | Maximum file size for uploads | `50MB` |
| `SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE` | Maximum request size for uploads | `50MB` |

Setting high file size limits may increase vulnerability to potential attacks. Consider security implications before increasing these limits.

Configuring application management

The following configuration, used in versions before 4.1, is deprecated and will be removed in version 5.0:

  • MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED: Enables or disables Prometheus metrics export.

Starting from version 4.1, use the following configuration. This setup is backwards compatible until version 5.0.

| Environment variable | Description | Default value |
| --- | --- | --- |
| `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED` | Enables Prometheus metrics export | `false` |
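
To enable scraping in a monitored environment, set the new-style flag (this is the only change needed relative to the defaults):

```yaml
environment:
  MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED: "true"
```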