Infrastructure prerequisites

The Integration Designer service requires the following components to be set up before it can be started:

| Component | Version | Purpose |
| --- | --- | --- |
| Kubernetes | 1.19+ | Container orchestration |
| PostgreSQL | 13+ | Advancing data source |
| MongoDB | 4.4+ | Integration configurations |
| Kafka | 2.8+ | Event-driven communication |
| OAuth2 Server | - | Authentication (Keycloak recommended) |

Configuration

Core service configuration

| Environment variable | Description | Example value |
| --- | --- | --- |
| `CONFIG_PROFILE` | Spring configuration profiles | `k8stemplate_v2,kafka-auth` |
| `LOGGING_LEVEL_APP` | Application logging level | `INFO` |

WebClient configuration

Integration Designer interacts with various APIs, some of which return large responses. To handle such cases efficiently, the FlowX WebClient buffer size must be configured to accommodate larger payloads, especially when working with legacy APIs that do not support pagination.

| Environment variable | Description | Default value |
| --- | --- | --- |
| `FLOWX_WEBCLIENT_BUFFERSIZE` | Buffer size (in bytes) for FlowX WebClient | `1048576` (1 MB) |

If you encounter truncated API responses or unexpected errors when fetching large payloads, increase the buffer size to at least 10 MB by setting `FLOWX_WEBCLIENT_BUFFERSIZE=10485760`.
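As a sketch, the buffer size could be raised through the container's environment in a Kubernetes deployment (the value shown is illustrative; adjust it to your payload sizes):

```yaml
# Illustrative container env entry for the Integration Designer deployment.
env:
  - name: FLOWX_WEBCLIENT_BUFFERSIZE
    value: "10485760"  # 10 MB, up from the 1 MB default
```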

Database configuration

PostgreSQL

| Environment variable | Description | Example value |
| --- | --- | --- |
| `ADVANCING_DATASOURCE_URL` | PostgreSQL JDBC URL | `jdbc:postgresql://postgresql:5432/advancing` |
| `ADVANCING_DATASOURCE_USERNAME` | Database username | `flowx` |
| `ADVANCING_DATASOURCE_PASSWORD` | Database password | `securePassword` |
| `ADVANCING_DATASOURCE_DRIVER_CLASS_NAME` | JDBC driver | `org.postgresql.Driver` |
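A minimal sketch of how these variables might be wired into a Kubernetes deployment; the Secret name `postgres-credentials` is an assumption, not a FlowX default:

```yaml
# Hypothetical env block for the advancing datasource; pull the password
# from a Secret rather than hardcoding it.
env:
  - name: ADVANCING_DATASOURCE_URL
    value: "jdbc:postgresql://postgresql:5432/advancing"
  - name: ADVANCING_DATASOURCE_USERNAME
    value: "flowx"
  - name: ADVANCING_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-credentials  # assumed Secret name
        key: password
```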

MongoDB

Integration Designer requires two MongoDB databases for managing integration-specific data and runtime data:

  • Integration Designer Database (integration-designer): Stores data specific to Integration Designer, such as integration configurations, metadata, and other operational data.
  • Shared Runtime Database (app-runtime): Shared across multiple services, this database manages runtime data essential for integration and data flow execution.

| Environment variable | Description | Example value |
| --- | --- | --- |
| `SPRING_DATA_MONGODB_URI` | Integration Designer MongoDB URI | `mongodb://mongodb-0.mongodb-headless:27017/integration-designer` |
| `MONGODB_USERNAME` | MongoDB username | `integration-designer` |
| `MONGODB_PASSWORD` | MongoDB password | `secureMongoPass` |
| `SPRING_DATA_MONGODB_STORAGE` | Storage type (Azure environments only) | `mongodb` (or `cosmosdb`) |
| `SPRING_DATA_MONGODB_RUNTIME_URI` | Runtime MongoDB URI | `mongodb://mongodb-0.mongodb-headless:27017/app-runtime` |
| `MONGODB_RUNTIME_USERNAME` | Runtime MongoDB username | `app-runtime` |
| `MONGODB_RUNTIME_PASSWORD` | Runtime MongoDB password | `secureRuntimePass` |

Integration Designer requires a runtime connection to function correctly. Starting the service without a configured and active runtime MongoDB connection is not supported.
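A sketch of the two required MongoDB connections side by side; hostnames and usernames are the example values from the table above, and passwords should come from Secrets in practice:

```yaml
# Both connections must be configured: the service's own database and
# the shared runtime database. The service does not start without the
# runtime connection.
env:
  - name: SPRING_DATA_MONGODB_URI
    value: "mongodb://mongodb-0.mongodb-headless:27017/integration-designer"
  - name: MONGODB_USERNAME
    value: "integration-designer"
  - name: SPRING_DATA_MONGODB_RUNTIME_URI
    value: "mongodb://mongodb-0.mongodb-headless:27017/app-runtime"
  - name: MONGODB_RUNTIME_USERNAME
    value: "app-runtime"
```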

Kafka configuration

Kafka connection and security variables

| Environment variable | Description | Example value |
| --- | --- | --- |
| `SPRING_KAFKA_BOOTSTRAP_SERVERS` | Kafka broker addresses | `localhost:9092` |
| `SPRING_KAFKA_SECURITY_PROTOCOL` | Security protocol | `PLAINTEXT` or `SASL_PLAINTEXT` |
| `FLOWX_WORKFLOW_CREATETOPICS` | Auto-create topics | `false` (default) |

Message size configuration

| Environment variable | Description | Default value |
| --- | --- | --- |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size | `52428800` (50 MB) |

This setting affects:

  • Producer message max bytes
  • Producer max request size
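Putting the connection and message-size settings together, a deployment sketch might look like this (the broker address is a placeholder):

```yaml
# Illustrative Kafka connection and message-size settings.
env:
  - name: SPRING_KAFKA_BOOTSTRAP_SERVERS
    value: "kafka-broker:9092"  # placeholder broker address
  - name: SPRING_KAFKA_SECURITY_PROTOCOL
    value: "SASL_PLAINTEXT"
  - name: KAFKA_MESSAGE_MAX_BYTES
    value: "52428800"  # 50 MB; applied to producer message max bytes and max request size
```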

Producer configuration

| Environment variable | Description | Default value |
| --- | --- | --- |
| `SPRING_KAFKA_PRODUCER_KEY_SERIALIZER` | Key serializer class | `org.apache.kafka.common.serialization.StringSerializer` |
| `SPRING_KAFKA_PRODUCER_VALUE_SERIALIZER` | Value serializer class | `org.springframework.kafka.support.serializer.JsonSerializer` |

Consumer configuration

| Environment variable | Description | Default value |
| --- | --- | --- |
| `KAFKA_CONSUMER_GROUP_ID_START_WORKFLOWS` | Start workflows consumer group | `start-workflows-group` |
| `KAFKA_CONSUMER_GROUP_ID_RES_ELEM_USAGE_VALIDATION` | Resource usage validation consumer group | `integration-designer-res-elem-usage-validation-group` |
| `KAFKA_CONSUMER_THREADS_START_WORKFLOWS` | Start workflows consumer threads | `3` |
| `KAFKA_CONSUMER_THREADS_RES_ELEM_USAGE_VALIDATION` | Resource usage validation consumer threads | `3` |
| `KAFKA_AUTH_EXCEPTION_RETRY_INTERVAL` | Retry interval after authorization errors | `10` (seconds) |

Topic naming convention and pattern creation

Integration Designer uses a standardized topic naming pattern that keeps topic names consistent across environments and makes them easy to identify.

Topic naming components

| Environment variable | Description | Default value |
| --- | --- | --- |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Base package for topics | `ai.flowx.` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment identifier | `dev.` |
| `KAFKA_TOPIC_NAMING_VERSION` | Topic version | `.v1` |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Topic name separator | `.` |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Alternative separator | `-` |
| `KAFKA_TOPIC_NAMING_ENGINERECEIVEPATTERN` | Engine receive pattern | `engine.receive.` |
| `KAFKA_TOPIC_NAMING_INTEGRATIONRECEIVEPATTERN` | Integration receive pattern | `integration.receive.` |

Topics are constructed using the following pattern:

`{prefix} + service + {separator/dot} + action + {separator/dot} + detail + {suffix}`

For example, a typical topic might look like:

`ai.flowx.dev.eventsgateway.receive.workflowinstances.v1`

Where:

  • `ai.flowx.dev.` is the prefix (package + environment)
  • `eventsgateway` is the service
  • `receive` is the action
  • `workflowinstances` is the detail
  • `.v1` is the suffix (version)
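The components of the example above map onto the naming variables like this (the values shown are the documented defaults, restated here for illustration):

```yaml
# These defaults concatenate to:
#   ai.flowx. + dev. + eventsgateway.receive.workflowinstances + .v1
#   = ai.flowx.dev.eventsgateway.receive.workflowinstances.v1
env:
  - name: KAFKA_TOPIC_NAMING_PACKAGE
    value: "ai.flowx."
  - name: KAFKA_TOPIC_NAMING_ENVIRONMENT
    value: "dev."
  - name: KAFKA_TOPIC_NAMING_VERSION
    value: ".v1"
```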

Kafka topic configuration

Core topics

| Environment variable | Description | Default pattern |
| --- | --- | --- |
| `KAFKA_TOPIC_AUDIT_OUT` | Topic for sending audit logs | `ai.flowx.dev.core.trigger.save.audit.v1` |

Events gateway topics

| Environment variable | Description | Default pattern |
| --- | --- | --- |
| `KAFKA_TOPIC_EVENTS_GATEWAY_OUT_MESSAGE` | Topic for workflow instances communication | `ai.flowx.dev.eventsgateway.receive.workflowinstances.v1` |

Engine and Integration communication topics

| Environment variable | Description | Default pattern |
| --- | --- | --- |
| `KAFKA_TOPIC_ENGINEPATTERN` | Pattern for Engine communication | `ai.flowx.dev.engine.receive.` |
| `KAFKA_TOPIC_INTEGRATIONPATTERN` | Pattern for Integration communication | `ai.flowx.dev.integration.receive.*` |

Application resource usage topics

| Environment variable | Description | Default pattern |
| --- | --- | --- |
| `KAFKA_TOPIC_APPLICATION_IN_RES_ELEM_USAGE_VALIDATION` | Topic for resource usage validation requests | `ai.flowx.dev.application-version.resources-usages.sub-res-validation.request-integration.v1` |
| `KAFKA_TOPIC_RESOURCES_USAGES_REFRESH` | Topic for resource usage refresh commands | `ai.flowx.dev.application-version.resources-usages.refresh.v1` |

OAuth authentication variables (when using SASL_PLAINTEXT)

| Environment variable | Description | Example value |
| --- | --- | --- |
| `KAFKA_OAUTH_CLIENT_ID` | OAuth client ID | `kafka` |
| `KAFKA_OAUTH_CLIENT_SECRET` | OAuth client secret | `kafka-secret` |
| `KAFKA_OAUTH_TOKEN_ENDPOINT_URI` | OAuth token endpoint | `kafka.auth.localhost` |

Inter-Service topic coordination

When configuring Kafka topics in the FlowX ecosystem, ensure proper coordination between services:

  1. Topic name matching: Output topics from one service must match the expected input topics of another service.

  2. Pattern consistency: The pattern values must be consistent across services:

    • Process Engine listens to topics matching: ai.flowx.dev.engine.receive.*
    • Integration Designer listens to topics matching: ai.flowx.dev.integration.receive.*
  3. Communication flow:

    • Other services write to topics matching the Engine’s pattern → Process Engine listens
    • Process Engine writes to topics matching the Integration Designer’s pattern → Integration Designer listens

The exact pattern value isn’t critical, but it must be identical across all connected services. Some deployments require manually creating Kafka topics in advance rather than dynamically. In these cases, all topic names must be explicitly defined and coordinated.

Kafka topics best practices

Large message handling for workflow instances topic

The workflow instances topic requires special configuration to handle large messages. By default, Kafka has message size limitations that may prevent Integration Designer from processing large workflow payloads.

Recommended `max.message.bytes` value: `10485760` (10 MB)

  1. Access AKHQ

    • Open the AKHQ web interface
    • Log in if authentication is required
  2. Navigate to Topic

    • Go to the “Topics” section
    • Find the topic: ai.flowx.dev.eventsgateway.receive.workflowinstances.v1
  3. Edit Configuration

    • Click on the topic name
    • Go to the “Configuration” tab
    • Locate or add max.message.bytes
    • Set the value to 10485760
    • Save changes
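If you prefer the Kafka CLI over AKHQ, the same change can be made with the stock `kafka-configs.sh` tool (the bootstrap server address is a placeholder for your broker):

```shell
# Raise max.message.bytes for the workflow instances topic to 10 MB.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics \
  --entity-name ai.flowx.dev.eventsgateway.receive.workflowinstances.v1 \
  --add-config max.message.bytes=10485760
```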

Configuring authentication and access roles

Integration Designer uses OAuth2 for secure access control. Set up OAuth2 configurations with these environment variables:

| Environment variable | Description | Example value |
| --- | --- | --- |
| `SECURITY_OAUTH2_BASE_SERVER_URL` | Base URL for OAuth2 authorization server | `https://keycloak.example.com/auth` |
| `SECURITY_OAUTH2_REALM` | Realm for OAuth2 authentication | `flowx` |
| `SECURITY_OAUTH2_CLIENT_CLIENT_ID` | Client ID for Integration Designer OAuth2 client | `integration-designer` |
| `SECURITY_OAUTH2_CLIENT_CLIENT_SECRET` | Client secret for Integration Designer OAuth2 client | `client-secret` |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_ID` | Client ID for admin service account | `admin-client` |
| `SECURITY_OAUTH2_SERVICE_ACCOUNT_ADMIN_CLIENT_SECRET` | Client secret for admin service account | `admin-secret` |

For detailed instructions on configuring user roles and access rights, refer to:

Access Management

For configuring a service account, refer to:

Integration Designer service account

Configuring logging

To control the log levels for Integration Designer, set the following environment variables:

| Environment variable | Description | Example value |
| --- | --- | --- |
| `LOGGING_LEVEL_ROOT` | Root Spring Boot log level | `INFO` |
| `LOGGING_LEVEL_APP` | Application-level log level | `INFO` |

Configuring admin ingress

Integration Designer provides an admin ingress route, which can be enabled and customized with additional annotations for SSL certificates or routing preferences.

```yaml
ingress:
  enabled: true
  admin:
    enabled: true
    hostname: "{{ .Values.flowx.ingress.admin }}"
    path: /integration(/|$)(.*)
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,platform,Flowx-Platform
```

Monitoring and maintenance

To monitor the performance and health of the Integration Designer, use tools like Prometheus or Grafana. Configure Prometheus metrics with:

| Environment variable | Description | Default value |
| --- | --- | --- |
| `MANAGEMENT_PROMETHEUS_METRICS_EXPORT_ENABLED` | Enable Prometheus metrics export | `false` |

RBAC configuration

Integration Designer requires specific RBAC (Role-Based Access Control) permissions to access Kubernetes ConfigMaps and Secrets, which store necessary configurations and credentials. Set up these permissions by enabling RBAC and defining the required rules:

```yaml
rbac:
  create: true
  rules:
    - apiGroups:
        - ""
      resources:
        - secrets
        - configmaps
        - pods
      verbs:
        - get
        - list
        - watch
```

This configuration grants read access (get, list, watch) to ConfigMaps, Secrets, and Pods, which is essential for retrieving application settings and credentials required by Integration Designer.

Additional resources