FlowX CMS setup
The CMS service is a microservice designed for managing taxonomies and content inside an application. Delivered as a Docker image, it simplifies content editing and analysis. This guide provides step-by-step instructions for setting up the service and configuring it to suit your needs.
Ensure the following infrastructure components are available before starting the CMS service:
- Docker Engine: Version 17.06 or higher.
- MongoDB: Version 4.4 or higher for storing taxonomies and content.
- Redis: Version 6.0 or higher.
- Kafka: Version 2.8 or higher.
- Elasticsearch: Version 7.11.0 or higher.
The service is pre-configured with most default values. However, some environment variables require customization during setup.
Dependencies overview
Configuration
Set application defaults
Define the default application name used when retrieving content. If this configuration is not provided, the default value is flowx.
Configuring authorization & access roles
Connect the CMS to an OAuth 2.0 identity management platform by setting the following variables:
Environment variable | Description |
---|---|
SECURITY_OAUTH2_BASE_SERVER_URL | Base URL for the OAuth 2.0 Authorization Server |
SECURITY_OAUTH2_CLIENT_CLIENT_ID | Unique identifier for the client application |
SECURITY_OAUTH2_CLIENT_CLIENT_SECRET | Secret key to authenticate client requests |
SECURITY_OAUTH2_REALM | Realm name for OAuth 2.0 provider authentication |
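As a sketch, the variables above can be passed to the container as environment variables. The server URL, client ID, secret, and realm below are placeholders for your own identity provider's details, not defaults from this guide:

```shell
# Placeholder OAuth 2.0 settings for the CMS container.
# All values are illustrative; substitute your identity provider's details.
export SECURITY_OAUTH2_BASE_SERVER_URL="https://auth.example.com/auth"
export SECURITY_OAUTH2_CLIENT_CLIENT_ID="cms-service-client"
export SECURITY_OAUTH2_CLIENT_CLIENT_SECRET="change-me"
export SECURITY_OAUTH2_REALM="flowx"
```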
For detailed role and access configuration, refer to: FlowX CMS access rights.
Configuring MongoDB
The CMS requires MongoDB for taxonomy and content storage. Configure MongoDB with the following variables:
Environment variable | Description | Default value |
---|---|---|
SPRING_DATA_MONGODB_URI | URI for connecting to the CMS MongoDB instance | Format: mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<arbiter-host>:<port>/${DB_NAME}?retryWrites=false |
DB_USERNAME | MongoDB username | cms-core |
DB_NAME | MongoDB database name | cms-core |
DB_PASSWORD | MongoDB password | |
MONGOCK_TRANSACTIONENABLED | Enables transactions in MongoDB for Mongock library | false (Set to false to support successful migrations) |
Set MONGOCK_TRANSACTIONENABLED to false due to known issues with transactions in MongoDB version 5.
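Putting the variables together, a minimal sketch of the MongoDB settings might look as follows. The hostnames are placeholders for your own replica-set members; the username, database name, and URI format come from the table above:

```shell
# Placeholder MongoDB settings; hostnames and password are illustrative.
export DB_USERNAME="cms-core"
export DB_NAME="cms-core"
export DB_PASSWORD="change-me"
# URI follows the documented format: two data nodes plus an arbiter,
# with retryWrites disabled.
export SPRING_DATA_MONGODB_URI="mongodb://${DB_USERNAME}:${DB_PASSWORD}@mongodb-0:27017,mongodb-1:27017,mongodb-arbiter:27017/${DB_NAME}?retryWrites=false"
# Keep Mongock transactions disabled (known issues with MongoDB 5 transactions).
export MONGOCK_TRANSACTIONENABLED="false"
```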
Configuring MongoDB (runtime database - additional data)
CMS also connects to a Runtime MongoDB instance for operational data:
Environment variable | Description | Default value |
---|---|---|
SPRING_DATA_MONGODB_RUNTIME_URI | URI for connecting to Runtime MongoDB | Format: mongodb://${RUNTIME_DB_USERNAME}:${RUNTIME_DB_PASSWORD}@<host1>,<host2>,<arbiter-host>:<port>/${RUNTIME_DB_NAME}?retryWrites=false |
RUNTIME_DB_USERNAME | Runtime MongoDB username | app-runtime |
RUNTIME_DB_NAME | Runtime MongoDB database name | app-runtime |
RUNTIME_DB_PASSWORD | Runtime MongoDB password | |
SPRING_DATA_MONGODB_STORAGE | Storage type for Runtime MongoDB (Azure environments only) | mongodb (Options: mongodb, cosmosdb) |
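The runtime connection follows the same shape. Again, the hostnames and password are placeholders:

```shell
# Placeholder runtime MongoDB settings.
export RUNTIME_DB_USERNAME="app-runtime"
export RUNTIME_DB_NAME="app-runtime"
export RUNTIME_DB_PASSWORD="change-me"
export SPRING_DATA_MONGODB_RUNTIME_URI="mongodb://${RUNTIME_DB_USERNAME}:${RUNTIME_DB_PASSWORD}@mongodb-0:27017,mongodb-1:27017,mongodb-arbiter:27017/${RUNTIME_DB_NAME}?retryWrites=false"
# On Azure environments backed by Cosmos DB, set this to "cosmosdb" instead.
export SPRING_DATA_MONGODB_STORAGE="mongodb"
```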
Configuring Redis
The service can use the same Redis component deployed for the engine. See Redis Configuration.
Environment variable | Description |
---|---|
SPRING_REDIS_HOST | Hostname or IP of the Redis server |
SPRING_REDIS_PASSWORD | Authentication password for Redis |
REDIS_TTL | Maximum time-to-live for Redis cache keys (in seconds) |
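A minimal sketch of the Redis settings; the host, password, and TTL value are placeholders, not defaults:

```shell
export SPRING_REDIS_HOST="redis-master"
export SPRING_REDIS_PASSWORD="change-me"
# TTL in seconds; 86400 (one day) is an illustrative value, not a default.
export REDIS_TTL="86400"
```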
Configuring Kafka
Connection settings
Environment variable | Description | Default value |
---|---|---|
SPRING_KAFKA_BOOTSTRAP_SERVERS | Address of the Kafka server | localhost:9092 |
SPRING_KAFKA_SECURITY_PROTOCOL | Security protocol for Kafka | "PLAINTEXT" |
Consumer and producer configuration
Environment variable | Description | Default value |
---|---|---|
SPRING_KAFKA_CONSUMER_GROUP_ID | Defines the Kafka consumer group | |
KAFKA_CONSUMER_THREADS | Number of Kafka consumer threads | |
KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL | Retry interval after an authorization exception | 10 |
KAFKA_MESSAGE_MAX_BYTES | Maximum message size in bytes | 52428800 (50MB) |
Consumer group configuration
Environment variable | Description | Default value |
---|---|---|
KAFKA_CONSUMER_GROUP_ID_CONTENT_TRANSLATE | Group ID for content translation | cms-consumer-preview |
KAFKA_CONSUMER_GROUP_ID_RES_USAGE_VALIDATION | Group ID for resource usage validation | cms-res-usage-validation-group |
Consumer thread configuration
Environment variable | Description | Default value |
---|---|---|
KAFKA_CONSUMER_THREADS_CONTENT_TRANSLATE | Threads for content translation | 1 |
KAFKA_CONSUMER_THREADS_RES_USAGE_VALIDATION | Threads for resource usage validation | 2 |
Topic configuration
The default topic names below follow the suggested naming convention for each environment.
Content request topics
Environment variable | Description | Default value |
---|---|---|
KAFKA_TOPIC_REQUEST_CONTENT_IN | Topic for incoming content retrieval requests | ai.flowx.dev.plugin.cms.trigger.retrieve.content.v1 |
KAFKA_TOPIC_REQUEST_CONTENT_OUT | Topic for content retrieval results | ai.flowx.dev.engine.receive.plugin.cms.retrieve.content.results.v1 |
Audit topics
Environment variable | Description | Default value |
---|---|---|
KAFKA_TOPIC_AUDIT_OUT | Topic for sending audit logs | ai.flowx.dev.core.trigger.save.audit.v1 |
Application resource usage validation
Environment variable | Description | Default value |
---|---|---|
KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION | Topic for resource usage validation | ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1 |
All actions that match a configured pattern will be consumed by the engine.
Inter-Service topic coordination
When configuring Kafka topics in the FlowX ecosystem, it’s critical to ensure proper coordination between services:
- Topic name matching: Output topics from one service must match the expected input topics of another service. For example, KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_OUT_CMS on the Application Manager must match KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION on the CMS.
- Pattern consistency: The pattern values must be consistent across services:
  - Process Engine listens to topics matching: ai.flowx.dev.engine.receive.*
  - Integration Designer listens to topics matching: ai.flowx.dev.integration.receive.*
- Communication flow:
  - Other services write to topics matching the Engine’s pattern → the Process Engine listens
  - The Process Engine writes to topics matching the Integration Designer’s pattern → the Integration Designer listens
The exact pattern value isn’t critical, but it must be identical across all connected services. Some deployments require manually creating Kafka topics in advance rather than dynamically. In these cases, all topic names must be explicitly defined and coordinated.
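To make the matching rule concrete, the sketch below sets the same topic name on both sides, reusing the CMS default from the table above; the Application Manager variable name is the one given in the matching example:

```shell
# Both services must point at the identical topic name.
TOPIC="ai.flowx.dev.application-version.resources-usages.sub-res-validation.cms.v1"

# Application Manager side (producer):
export KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_OUT_CMS="$TOPIC"

# CMS side (consumer):
export KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION="$TOPIC"

# A quick sanity check for deployment scripts:
if [ "$KAFKA_TOPIC_APPLICATION_RESOURCE_RESELEMUSAGEVALIDATION_OUT_CMS" != "$KAFKA_TOPIC_APPLICATION_IN_RESUSAGEVALIDATION" ]; then
  echo "topic mismatch between Application Manager and CMS" >&2
  exit 1
fi
```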
Kafka authentication
For secure environments, enable OAuth authentication with the following configuration:
Required environment variables:
- KAFKA_OAUTH_CLIENT_ID
- KAFKA_OAUTH_CLIENT_SECRET
- KAFKA_OAUTH_TOKEN_ENDPOINT_URI
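A sketch of the OAuth settings for a secured cluster. The token endpoint and credentials are placeholders, and the SASL_PLAINTEXT protocol value is an assumption, not a value stated in this guide:

```shell
# Illustrative values only; adjust to your broker's security setup.
export SPRING_KAFKA_SECURITY_PROTOCOL="SASL_PLAINTEXT"  # assumption for a secured cluster
export KAFKA_OAUTH_CLIENT_ID="kafka-oauth-client"
export KAFKA_OAUTH_CLIENT_SECRET="change-me"
export KAFKA_OAUTH_TOKEN_ENDPOINT_URI="https://auth.example.com/token"
```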
Configuring logging
Environment variable | Description |
---|---|
LOGGING_LEVEL_ROOT | Log level for root service logs |
LOGGING_LEVEL_APP | Log level for application-specific logs |
LOGGING_LEVEL_MONGO_DRIVER | Log level for the MongoDB driver |
Configuring file storage
Public storage
Environment variable | Description |
---|---|
APPLICATION_FILE_STORAGE_S3_SERVER_URL | URL of S3 server for file storage |
APPLICATION_FILE_STORAGE_S3_BUCKET_NAME | S3 bucket name |
APPLICATION_FILE_STORAGE_S3_ROOT_DIRECTORY | Root directory in S3 bucket |
APPLICATION_FILE_STORAGE_S3_CREATE_BUCKET | Auto-create bucket if it doesn’t exist (true/false) |
APPLICATION_FILE_STORAGE_S3_PUBLIC_URL | Public URL for accessing files |
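A sketch of a public storage configuration; the server URL, bucket name, root directory, and public URL are all placeholders:

```shell
# Illustrative S3 settings; substitute your own endpoint and bucket.
export APPLICATION_FILE_STORAGE_S3_SERVER_URL="http://minio.example.com:9000"
export APPLICATION_FILE_STORAGE_S3_BUCKET_NAME="cms-public"
export APPLICATION_FILE_STORAGE_S3_ROOT_DIRECTORY="cms"
export APPLICATION_FILE_STORAGE_S3_CREATE_BUCKET="true"
export APPLICATION_FILE_STORAGE_S3_PUBLIC_URL="https://files.example.com"
```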
Private storage
Private CMS securely stores uploaded documents and AI-generated documents, keeping them hidden from the Media Library and accessible only through authenticated endpoints with access-token permissions. Files can be retrieved using tags (e.g., ai_document, ref:UUID_doc) and are excluded from application builds.
Environment variable | Description |
---|---|
APPLICATION_FILE_STORAGE_S3_PRIVATE_SERVER_URL | URL of S3 server for private storage |
APPLICATION_FILE_STORAGE_S3_PRIVATE_BUCKET_NAME | S3 bucket name for private storage |
APPLICATION_FILE_STORAGE_S3_PRIVATE_CREATE_BUCKET | Auto-create private bucket (true/false) |
APPLICATION_FILE_STORAGE_S3_PRIVATE_ACCESS_KEY | Access key for private S3 server |
APPLICATION_FILE_STORAGE_S3_PRIVATE_SECRET_KEY | Secret key for private S3 server |
Configuring file upload size
Environment variable | Description | Default value |
---|---|---|
SPRING_SERVLET_MULTIPART_MAX_FILE_SIZE | Maximum file size for uploads | 50MB |
SPRING_SERVLET_MULTIPART_MAX_REQUEST_SIZE | Maximum request size for uploads | 50MB |
Setting high file size limits may increase vulnerability to potential attacks. Consider security implications before increasing these limits.
Configuring application management
The following configuration from versions before 4.1 will be deprecated in version 5.0:
- MANAGEMENT_METRICS_EXPORT_PROMETHEUS_ENABLED: Enables or disables Prometheus metrics export.
Starting from version 4.1, use the following configuration. This setup is backwards compatible until version 5.0.
Environment variable | Description | Default value |
---|---|---|
MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED | Enables Prometheus metrics export | false |