Available starting with FlowX.AI 5.5.0

The Organization Manager is a new microservice responsible for organization and tenant management, including user registration, organization lifecycle, and platform component health monitoring.

Dependencies

Before setting up the Organization Manager, ensure you have the following dependencies in place:
  • PostgreSQL database for storing organization and tenant data
  • Kafka for event-driven communication with other FlowX.AI services
  • Redis for caching
  • Keycloak (or compatible OAuth2 provider) for authentication and authorization
  • SpiceDB for fine-grained authorization

Infrastructure prerequisites

| Component | Description |
| --- | --- |
| PostgreSQL | Dedicated database for organization data |
| Kafka | Message broker for inter-service communication |
| Redis | Caching layer for improved performance |
| Keycloak | Identity provider for service authentication |
| SpiceDB | Authorization service for fine-grained access control |

Configuration

Authorization configuration

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `SECURITY_TYPE` | Security type | `jwt-public-key` |
| `SECURITY_OAUTH2_BASE_SERVER_URL` | Base URL of the OAuth2/OIDC server | - |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTID` | Client ID for service account | `flowx-organization-manager-sa` |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET` | Client secret for service account | - |
| `SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_MAINAUTHPROVIDER_TOKEN_URI` | Provider token URI | `${SECURITY_OAUTH2_BASE_SERVER_URL}/realms/${SECURITY_OAUTH2_SA_REALM}/protocol/openid-connect/token` |
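Taken together, these variables can be supplied as a Helm `env` block along the following lines; the Keycloak URL and realm name are illustrative placeholders, not defaults:

```yaml
env:
  SECURITY_TYPE: jwt-public-key
  SECURITY_OAUTH2_BASE_SERVER_URL: https://keycloak.example.com/auth   # illustrative URL
  SECURITY_OAUTH2_SA_REALM: flowx                                      # illustrative realm name
  SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTID: flowx-organization-manager-sa
  # SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET
  # should be injected from a secret, not set in plain values.
```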

PostgreSQL configuration

The Organization Manager uses its own dedicated PostgreSQL database.

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `SPRING_DATASOURCE_URL` | JDBC connection URL for PostgreSQL | `jdbc:postgresql://postgresql:5432/organization_manager` |
| `SPRING_DATASOURCE_USERNAME` | Database username | `postgres` |
| `SPRING_DATASOURCE_PASSWORD` | Database password | - |

Ensure the database is created before deploying the service. The Organization Manager manages its own schema migrations via Liquibase.
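If the database runs on a different host or under different credentials than the defaults, point the datasource variables at it explicitly; the host and username below are illustrative:

```yaml
env:
  SPRING_DATASOURCE_URL: jdbc:postgresql://my-postgres.example.svc:5432/organization_manager  # illustrative host
  SPRING_DATASOURCE_USERNAME: org_manager_user                                                # illustrative dedicated user
  # SPRING_DATASOURCE_PASSWORD should be injected from a secret.
```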

Redis configuration

The Organization Manager uses Redis for caching. Configure the connection using the standard Redis environment variables. Quick reference:

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `SPRING_REDIS_HOST` | Redis server hostname | `redis-master` |
| `SPRING_REDIS_PORT` | Redis server port | `6379` |
| `SPRING_REDIS_PASSWORD` | Redis authentication password | - |

For complete Redis configuration, including Sentinel mode, Cluster mode, and SSL/TLS setup, see the Redis Configuration guide.

Kafka configuration

Core Kafka settings

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `SPRING_KAFKA_BOOTSTRAP_SERVERS` | Address of the Kafka server(s) | `localhost:9092` |
| `KAFKA_MESSAGE_MAX_BYTES` | Maximum message size (bytes) | `52428800` (50 MB) |

Topic naming configuration

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `KAFKA_TOPIC_NAMING_PACKAGE` | Package prefix for topic names | `ai.flowx.` |
| `KAFKA_TOPIC_NAMING_ENVIRONMENT` | Environment segment for topic names | |
| `KAFKA_TOPIC_NAMING_VERSION` | Version suffix for topic names | `.v1` |
| `KAFKA_TOPIC_NAMING_SEPARATOR` | Primary separator for topic names | `.` |
| `KAFKA_TOPIC_NAMING_SEPARATOR2` | Secondary separator for topic names | `-` |
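The defaults above suggest that topic names are assembled from a package prefix, an optional environment segment, the topic name, and a version suffix. As an illustrative sketch (the exact composition rule is an assumption, not documented here), adding an environment segment might change the published topic as follows:

```yaml
env:
  # Illustrative: insert a "dev" environment segment into topic names.
  # With the defaults above, the organization events topic would then
  # resolve to something like ai.flowx.dev.organization.events.v1.
  KAFKA_TOPIC_NAMING_ENVIRONMENT: "dev."
```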

Kafka topics

The Organization Manager publishes organization lifecycle events:

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `KAFKA_TOPIC_ORGANIZATION_EVENTS_OUT` | Topic for organization lifecycle events | `ai.flowx.organization.events.v1` |

CAS lib configuration (SpiceDB)

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `FLOWX_LIB_CASCLIENT_SPICEDB_HOST` | SpiceDB hostname | `spicedb` |
| `FLOWX_LIB_CASCLIENT_SPICEDB_PORT` | SpiceDB gRPC port | `50051` |
| `FLOWX_LIB_CASCLIENT_SPICEDB_TOKEN` | SpiceDB authentication token | - |

Logging configuration

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `LOGGING_LEVEL_ROOT` | Root logging level | `INFO` |
| `LOGGING_LEVEL_APP` | Application-specific log level | `INFO` |
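When diagnosing startup or connectivity problems, the application log level can be raised independently of the root level, for example:

```yaml
env:
  LOGGING_LEVEL_ROOT: INFO
  LOGGING_LEVEL_APP: DEBUG   # verbose application logs while troubleshooting
```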

Multipart upload configuration

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `MULTIPART_MAX_FILE_SIZE` | Maximum file size per upload | `50MB` |
| `MULTIPART_MAX_REQUEST_SIZE` | Maximum total request size | `50MB` |

Secrets management

The Organization Manager requires several secrets to be configured. These should be stored securely and referenced via Kubernetes secrets or a secrets management solution.

| Secret Name | Description |
| --- | --- |
| `SPRING_DATASOURCE_PASSWORD` | PostgreSQL database password |
| `SPRING_REDIS_PASSWORD` | Redis authentication password |
| `SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET` | Keycloak service account secret |
| `FLOWX_LIB_CASCLIENT_SPICEDB_TOKEN` | SpiceDB authentication token |
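One way to provide these values is as standard Kubernetes `Secret` objects whose names and keys match what the deployment expects. The sketch below mirrors two of the secret and key names used in the Helm example in the Deployment section; adjust them to whatever your cluster actually uses:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-generic
type: Opaque
stringData:
  postgresql-password-key: <your-postgres-password>
---
apiVersion: v1
kind: Secret
metadata:
  name: spicedb-generic
type: Opaque
stringData:
  spicedb-token: <your-spicedb-token>
```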

Deployment

Helm values example

Below is an example Helm values configuration for deploying the Organization Manager:
```yaml
fullnameOverride: organization-manager

image:
  repository: <your-registry>/organization-manager

replicaCount: 1

env:
  SPRING_PROFILES_ACTIVE: production

  # PostgreSQL
  SPRING_DATASOURCE_URL: jdbc:postgresql://postgresql:5432/organization_manager
  SPRING_DATASOURCE_USERNAME: postgres

  # Kafka
  SPRING_KAFKA_BOOTSTRAP_SERVERS: kafka:9092

  # OAuth2
  SECURITY_TYPE: jwt-public-key
  SECURITY_OAUTH2_BASE_SERVER_URL: https://keycloak.example.com/auth

  # Redis
  SPRING_REDIS_HOST: redis-master

  # SpiceDB
  FLOWX_LIB_CASCLIENT_SPICEDB_HOST: spicedb
  FLOWX_LIB_CASCLIENT_SPICEDB_PORT: 50051

# Secrets configuration
extraEnvVarsMultipleSecretsCustomKeys:
  - name: postgresql-generic
    secrets:
      SPRING_DATASOURCE_PASSWORD: postgresql-password-key
  - name: redis-generic
    secrets:
      SPRING_REDIS_PASSWORD: redis-password
  - name: flowx-auth
    secrets:
      SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_MAINIDENTITY_CLIENTSECRET: keycloakOrgManagerClientSecret
  - name: spicedb-generic
    secrets:
      FLOWX_LIB_CASCLIENT_SPICEDB_TOKEN: spicedb-token

rbac:
  create: true

ingress:
  enabled: false

podLabels:
  flowx.ai/network-log: "true"
  flowx.ai/egress-s-kafka: "true"
  flowx.ai/egress-s-postgresql: "true"
  flowx.ai/routing-name: "organization-manager"
  flowx.ai/prometheus-scrape: "organization-manager"
```

Network policies

The Organization Manager requires network access to the following services:
| Service | Purpose | Pod Label |
| --- | --- | --- |
| Kafka | Message broker communication | `flowx.ai/egress-s-kafka` |
| PostgreSQL | Primary data storage | `flowx.ai/egress-s-postgresql` |
| Redis | Caching | `flowx.ai/egress-s-redis` |
| Keycloak | Authentication | `flowx.ai/egress-s-keycloak` |
| SpiceDB | Authorization | `flowx.ai/egress-s-spicedb` |
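If egress is enforced through Kubernetes `NetworkPolicy` objects keyed on these labels, a policy for one of the dependencies might look like the sketch below. The policy name and target selector are illustrative assumptions; only the pod label and the gRPC port come from this page:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: organization-manager-egress-spicedb   # illustrative name
spec:
  podSelector:
    matchLabels:
      flowx.ai/egress-s-spicedb: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: spicedb                    # illustrative selector for the SpiceDB pods
      ports:
        - protocol: TCP
          port: 50051                         # SpiceDB gRPC port
```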

Monitoring

The Organization Manager exposes Prometheus metrics for monitoring. Enable scraping by setting the pod label:

```yaml
podLabels:
  flowx.ai/prometheus-scrape: "organization-manager"
```

Health endpoints

| Endpoint | Description |
| --- | --- |
| `/actuator/health` | Health check endpoint |
| `/actuator/metrics` | Metrics endpoint |
| `/actuator/info` | Application info endpoint |
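If your Helm chart supports standard probe configuration, `/actuator/health` is the natural target for Kubernetes probes. A sketch, assuming the service listens on port 8080 (as in the verification commands below) and that the chart accepts plain probe values; delays and periods are illustrative:

```yaml
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 15
```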

Verify your setup

  • The Organization Manager pod is running and healthy: `kubectl get pods -l app=organization-manager`
  • The health endpoint returns HTTP 200: `curl http://organization-manager:8080/actuator/health`
  • Database migrations completed successfully (check pod logs for the Liquibase message `Update has been successful`)
  • The SpiceDB connection is established (check pod logs for successful CAS client initialization)
  • The Kafka topic `ai.flowx.organization.events.v1` exists and the service can publish to it

Troubleshooting

Symptoms: Service fails to start with database connection errors.
Solutions:
  1. Verify the organization_manager database exists in PostgreSQL
  2. Check that the database user has appropriate permissions
  3. Ensure network connectivity between the pod and the PostgreSQL service
  4. Verify the JDBC URL format is correct

Symptoms: Authorization errors, or the service fails to initialize the CAS client.
Solutions:
  1. Verify SpiceDB is running and reachable at the configured host and port
  2. Check that the SpiceDB token is correct
  3. Ensure network policies allow gRPC traffic to SpiceDB on port 50051
  4. Review pod logs for specific CAS client error messages

Symptoms: Organization events are not reaching downstream services.
Solutions:
  1. Verify the Kafka bootstrap servers are reachable
  2. Check that the ai.flowx.organization.events.v1 topic exists
  3. Ensure the service has producer permissions on the topic
  4. Review KAFKA_MESSAGE_MAX_BYTES if large messages fail

Symptoms: 401/403 errors when communicating with other FlowX.AI services.
Solutions:
  1. Verify the Keycloak service account is properly configured
  2. Check that client secrets match between the configuration and Keycloak
  3. Ensure the service account has the required roles assigned
  4. Verify SECURITY_TYPE is set to jwt-public-key

Last modified on February 27, 2026