Integration designer setup
This guide provides step-by-step instructions for setting up and configuring the Integration Designer service, including database, Kafka, and OAuth2 authentication settings, so that it can manage integrations and data flows.
Infrastructure prerequisites
The Integration Designer service requires the following components to be set up before it can be started:
- PostgreSQL - version 13 or higher, for the advancing data source
- MongoDB - version 4.4 or higher for managing integration and runtime data
- Kafka - version 2.8 or higher for event-driven communication between services
- OAuth2 Authentication - Ensure a Keycloak server or compatible OAuth2 authorization server is configured
Configuration
Database configuration
Integration Designer uses both PostgreSQL and MongoDB: PostgreSQL stores advancing data, while MongoDB stores integration and runtime data. Configure these database connections with the following environment variables:
PostgreSQL (Advancing data source)
- `ADVANCING_DATASOURCE_URL` - Database URL for the advancing data source in PostgreSQL
- `ADVANCING_DATASOURCE_USERNAME` - Username for the advancing data source in PostgreSQL
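As an illustrative sketch, these could be set in a container spec or Helm values file as follows. The JDBC URL shape and all host, database, and user names below are placeholder assumptions, not defaults shipped with the product:

```yaml
env:
  # Placeholder host and database name; adjust for your PostgreSQL deployment
  ADVANCING_DATASOURCE_URL: "jdbc:postgresql://postgresql.example.local:5432/advancing"
  # Placeholder user
  ADVANCING_DATASOURCE_USERNAME: "flowx"
```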
MongoDB (Integration data and runtime data)
Integration Designer requires two MongoDB databases for managing integration-specific data and runtime data. The `integration-designer` database is dedicated to Integration Designer, while the shared `app-runtime` database supports multiple services.
- Integration Designer database (`integration-designer`) - Stores data specific to Integration Designer, such as integration configurations and metadata.
- Shared runtime database (`app-runtime`) - Used across multiple services for runtime data.
Set up these MongoDB connections with the following environment variables:
- `SPRING_DATA_MONGODB_URI` - URI for connecting to the Integration Designer MongoDB instance
  - Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<arbiter-host>:<port>/<database>?retryWrites=false`
  - `DB_USERNAME`: `integration-designer`
- `SPRING_DATA_MONGODB_STORAGE` - Specifies the storage type used for the Runtime MongoDB instance (Azure environments only)
  - Possible values: `mongodb`, `cosmosdb`
  - Default value: `mongodb`
- `SPRING_DATA_MONGODB_RUNTIME_ENABLED` - Enables runtime MongoDB usage
  - Default value: `true`
- `SPRING_DATA_MONGODB_RUNTIME_URI` - URI for connecting to the Runtime MongoDB instance (`app-runtime`)
  - Format: `mongodb://${DB_USERNAME}:${DB_PASSWORD}@<host1>,<host2>,<arbiter-host>:<port>/<database>?retryWrites=false`
  - `DB_USERNAME`: `app-runtime`
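Following the URI formats above, a minimal sketch of the MongoDB settings might look like this. The hostnames and replica-set layout are placeholders, and the password is assumed to come from a secret:

```yaml
env:
  # Placeholder hosts; match the format documented above
  SPRING_DATA_MONGODB_URI: "mongodb://integration-designer:${DB_PASSWORD}@mongo-0,mongo-1,mongo-arbiter:27017/integration-designer?retryWrites=false"
  SPRING_DATA_MONGODB_RUNTIME_ENABLED: "true"
  SPRING_DATA_MONGODB_RUNTIME_URI: "mongodb://app-runtime:${DB_PASSWORD}@mongo-0,mongo-1,mongo-arbiter:27017/app-runtime?retryWrites=false"
```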
Configuring Kafka
To configure Kafka for Integration Designer, set the following environment variables. This configuration includes naming patterns, consumer group settings, and retry intervals for authentication exceptions.
General Kafka configuration
- `SPRING_KAFKA_BOOTSTRAP_SERVERS` - Address of the Kafka server, in the format `host:port`
- `KAFKA_TOPIC_NAMING_ENVIRONMENT` - Environment-specific suffix for Kafka topic names
- `FLOWX_WORKFLOW_CREATETOPICS` - Controls automatic creation of Kafka topics
  - When set to `true`: In development environments, where Kafka topics may need to be created automatically, enabling this setting (`flowx.workflow.createTopics: true`) creates the "in" and "out" topics automatically when workflows are created, eliminating the need to wait for topic creation at runtime.
  - Default value (`false`): In production or controlled environments, where automated topic creation is not desired, leave this setting `false` to prevent unintended Kafka topic creation.
Kafka consumer settings
- `KAFKA_CONSUMER_GROUP_ID_START_WORKFLOWS` - Consumer group ID for starting workflows
  - Default value: `start-workflows-group`
- `KAFKA_CONSUMER_THREADS_START_WORKFLOWS` - Number of Kafka consumer threads for starting workflows
  - Default value: `3`
- `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` - Interval (in seconds) between retries after an `AuthorizationException`
  - Default value: `10`
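Putting the Kafka variables together, an illustrative sketch (the bootstrap server address is a placeholder; the remaining values are the documented defaults):

```yaml
env:
  SPRING_KAFKA_BOOTSTRAP_SERVERS: "kafka.example.local:9092"        # placeholder host:port
  KAFKA_TOPIC_NAMING_ENVIRONMENT: "dev."
  KAFKA_CONSUMER_GROUP_ID_START_WORKFLOWS: "start-workflows-group"  # default
  KAFKA_CONSUMER_THREADS_START_WORKFLOWS: "3"                       # default
  KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL: "10"                         # default, in seconds
```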
Kafka topic naming structure
The Kafka topics for Integration Designer use a structured naming convention with dynamic components, allowing for easy integration across environments. This setup defines separators, environment identifiers, and specific naming patterns for both engine and integration-related messages.
Topic naming components
Component | Description | Default Value |
---|---|---|
package | Package identifier for namespace | ai.flowx. |
environment | Environment identifier | dev. |
version | Version identifier for topic compatibility | .v1 |
separator | Primary separator for components | . |
separator2 | Secondary separator for additional distinction | - |
prefix | Combines package and environment as a topic prefix | ${kafka.topic.naming.package}${kafka.topic.naming.environment} |
suffix | Appends version to the end of the topic name | ${kafka.topic.naming.version} |
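Expressed as application configuration, the defaults from the table compose as shown below. This is an illustrative sketch of how the components combine; the exact property-file layout in your deployment may differ:

```yaml
kafka:
  topic:
    naming:
      package: "ai.flowx."
      environment: "dev."
      version: ".v1"
      separator: "."
      separator2: "-"
      # prefix resolves to "ai.flowx.dev." with the defaults above
      prefix: "${kafka.topic.naming.package}${kafka.topic.naming.environment}"
      # suffix resolves to ".v1"
      suffix: "${kafka.topic.naming.version}"
```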
Predefined patterns for services
- Engine Receive Pattern - `kafka.topic.naming.engineReceivePattern`
  - Pattern: `engine${dot}receive${dot}`
  - Example topic prefix: `ai.flowx.dev.engine.receive.`
- Integration Receive Pattern - `kafka.topic.naming.integrationReceivePattern`
  - Pattern: `integration${dot}receive${dot}`
  - Example topic prefix: `ai.flowx.dev.integration.receive.`
Kafka topics
- Events Gateway - Outgoing Messages
  - Topic: `${kafka.topic.naming.prefix}eventsgateway${dot}receive${dot}workflowinstances${kafka.topic.naming.suffix}`
  - Purpose: Topic for outgoing workflow instance messages from the events gateway
  - Example value: `ai.flowx.dev.eventsgateway.receive.workflowinstances.v1`
- Engine Pattern
  - Pattern: `${kafka.topic.naming.prefix}${kafka.topic.naming.engineReceivePattern}*`
  - Purpose: Topic pattern for messages received by the engine service
  - Example value: `ai.flowx.dev.engine.receive.*`
- Integration Pattern
  - Pattern: `${kafka.topic.naming.prefix}${kafka.topic.naming.integrationReceivePattern}*`
  - Purpose: Topic pattern for messages received by the integration service
  - Example value: `ai.flowx.dev.integration.receive.*`
Replace placeholders with appropriate values for your environment before starting the service.
Configuring authentication and access roles
Integration Designer uses OAuth2 for secure access control. Set up OAuth2 configurations with these environment variables:
- `SECURITY_OAUTH2_BASE_SERVER_URL` - Base URL for the OAuth 2.0 authorization server
- `SECURITY_OAUTH2_CLIENT_CLIENT_ID` - Unique identifier for the client application registered with the OAuth 2.0 server
- `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` - Secret key for authenticating requests made by the authorization client
- `SECURITY_OAUTH2_REALM` - Realm name for OAuth2 authentication
- `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID` - Client ID for the Integration Designer service account
- `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET` - Client secret for the Integration Designer service account
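A minimal sketch of the OAuth2 settings, assuming a Keycloak server; every URL, realm, and client name below is a placeholder, and secrets should be injected from a secret store rather than written in plain text:

```yaml
env:
  SECURITY_OAUTH2_BASE_SERVER_URL: "https://keycloak.example.local"              # placeholder URL
  SECURITY_OAUTH2_REALM: "flowx"                                                  # placeholder realm
  SECURITY_OAUTH2_CLIENT_CLIENT_ID: "integration-designer"                        # placeholder client ID
  SECURITY_OAUTH2_CLIENT_CLIENT_SECRET: "<client-secret>"                         # supply via secret store
  SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID: "integration-designer-sa"      # placeholder service account
  SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET: "<service-account-secret>" # supply via secret store
```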
Refer to the dedicated section for configuring user roles and access rights:
Access Management
Refer to the dedicated section for configuring a service account:
Integration Designer service account
Configuring logging
To control the log levels for Integration Designer, set the following environment variables:
- `LOGGING_LEVEL_ROOT` - Log level for root Spring Boot microservice logs
- `LOGGING_LEVEL_APP` - Log level for application-level logs
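For example, the levels below are illustrative choices, not product defaults:

```yaml
env:
  LOGGING_LEVEL_ROOT: "INFO"   # example level for framework logs
  LOGGING_LEVEL_APP: "DEBUG"   # example level for application logs
```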
Configuring admin ingress
Integration Designer provides an admin ingress route, which can be enabled and customized with additional annotations for SSL certificates or routing preferences.
- Enabled: Set to `true` to enable the admin ingress route.
- Hostname: Define the hostname for admin access.
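A sketch of what this could look like in a Helm values file; the exact key names depend on your chart, and the hostname and annotation below are illustrative assumptions:

```yaml
ingress:
  admin:
    enabled: true
    hostname: "admin.example.local"                        # placeholder hostname
    annotations:
      nginx.ingress.kubernetes.io/ssl-redirect: "true"     # example SSL annotation
```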
Monitoring and maintenance
To monitor the performance and health of Integration Designer, use tools like Prometheus or Grafana. Configure Prometheus metrics with the following environment variable:
- `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED` - Enables or disables Prometheus metrics export
  - Default value: `false`
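To turn the exporter on, for example:

```yaml
env:
  MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED: "true"
```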
RBAC configuration
Integration Designer requires specific RBAC (Role-Based Access Control) permissions to access Kubernetes ConfigMaps and Secrets, which store necessary configurations and credentials. Set up these permissions by enabling RBAC and defining the required rules.
- `rbac.create`: Set to `true` to create RBAC resources.
- `rbac.rules`: Define custom RBAC rules.

This configuration grants read access (`get`, `list`, `watch`) to ConfigMaps, Secrets, and Pods, which Integration Designer needs in order to retrieve application settings and credentials.