Available starting with FlowX.AI 5.5.0

The AI Platform is a suite of 23 microservices (Java and Python) that power config-time AI agents, business agents, knowledge management, and conversational AI capabilities.
## Overview
The AI Platform consists of three layers:

- Java services — Core platform services handling data, orchestration, and storage (gRPC + GraphQL)
- Python services — AI agent services for code generation, analysis, design, and knowledge processing (REST + gRPC)
- Event-driven workers — Background services consuming Kafka topics for indexing, OCR, and replication
## Infrastructure requirements
### DGraph
Graph database for knowledge storage. Requires 3 Alpha + 3 Zero nodes for HA.

### Qdrant
Vector database for embeddings. Cluster mode is recommended for production.

### S3-compatible storage
Object storage for binaries and files. Any S3-compatible provider works (MinIO, AWS S3, etc.).

### Kafka
Message broker for event-driven communication. KRaft mode is supported.

### Keycloak
Identity provider for OAuth2 authentication across all services.

### SpiceDB
Fine-grained authorization system for access control.
## Service architecture

### Java services
| Service | Default Port | Protocol | Purpose |
|---|---|---|---|
| Connected Graph | 9100 | GraphQL | API gateway and service orchestrator |
| Agents | 9101 | gRPC | Agent lifecycle management |
| Binaries | 9102 | gRPC | File and binary artifact storage |
| Conversations | 9103 | gRPC | Conversation management |
| Models | 9104 | gRPC | LLM model registry and configuration |
| Tenants | 9105 | gRPC | Multi-tenant management |
| Knowledge Graph (KAG) | 9106 | gRPC | Knowledge graph ingestion |
| MCP | 9108 | gRPC | Model Context Protocol integration |
### Python services
| Service | Default Port | Protocol | Purpose |
|---|---|---|---|
| Planner | 9150 | gRPC | Intent understanding and task orchestration |
| AI Developer | 9151 | REST | Code generation (config-time agent) |
| AI Analyst | 9152 | REST | Process analysis (config-time agent) |
| AI Designer | 9153 | REST | UI generation (config-time agent) |
| Agent Builder | 9154 | REST | Business agent builder |
| Knowledgebase RAG | 9155 | gRPC | Retrieval-augmented generation |
| Embedder | 9156 | gRPC | Embedding generation |
| Knowledgebase | 9109 | gRPC | Knowledge base operations |
### Event-driven workers

These services have no exposed ports and consume from Kafka topics:

| Service | Trigger | Purpose |
|---|---|---|
| Knowledgebase Indexer v2 | `ai.flowx.ai-platform.internal.binaries.lifecycle` | Document vector indexing |
| OCR | `ai.flowx.ai-platform.internal.ocr.commands` | Document text extraction |
| Tenants Replicator | `ai.flowx.organization.events.v1` | Organization event replication |
In production Kubernetes deployments, all services default to port 9100 via the `SERVICE_PORT` variable. The ports listed above are the defaults for local development with Docker Compose.

## Environment variables
Environment variables fall into six groups: service discovery, authentication, infrastructure, AI models, observability, and service endpoints.

### Service discovery

These variables control how services locate each other within the cluster:
| Environment Variable | Description | Default Value |
|---|---|---|
| `GRPC_HOST_RESOLVER` | Service discovery method | `k8s` |
| `GRPC_HOST_RESOLVER_HELM_CHART` | Helm chart name for K8s service resolution | — |
| `GRPC_HOST_RESOLVER_FIXED_IP` | Fixed IP/hostname when using the host resolver | `ai-platform` |
| `SERVICE_PORT` | Port the service listens on | `9100` (production) |
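As a sketch, the same variables in the two deployment modes might look like this; the release name and the non-Kubernetes resolver mode are assumptions, not confirmed values:

```shell
# Kubernetes deployment: resolve peers through K8s services
# ("ai-platform" as the Helm release name is an assumption)
export GRPC_HOST_RESOLVER=k8s
export GRPC_HOST_RESOLVER_HELM_CHART=ai-platform
export SERVICE_PORT=9100

# Docker Compose / local deployment: point every service at a fixed hostname
# instead (the exact resolver mode name for non-k8s setups is not confirmed)
# export GRPC_HOST_RESOLVER_FIXED_IP=ai-platform
```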
## Kafka topics

The AI Platform uses the following internal Kafka topics:

| Topic | Partitions | Purpose |
|---|---|---|
| `ai.flowx.ai-platform.internal.binaries.lifecycle` | 10 | Binary upload events triggering indexing |
| `ai.flowx.organization.events.v1` | 10 | Organization lifecycle events for tenant replication |
| `ai.flowx.ai-platform.internal.ocr.commands` | 10 | OCR processing commands |
| `ai.flowx.ai-platform.internal.ocr.progress` | 10 | OCR processing progress updates |
For production environments, create these topics manually with appropriate replication factors. For development, Kafka auto-topic creation handles them automatically.
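As a hedged sketch, the four topics could be provisioned with the stock Kafka CLI; the replication factor of 3 assumes a three-broker cluster, so adjust it to your environment:

```shell
# Topics from the table above, created with 10 partitions each
TOPICS="ai.flowx.ai-platform.internal.binaries.lifecycle \
ai.flowx.organization.events.v1 \
ai.flowx.ai-platform.internal.ocr.commands \
ai.flowx.ai-platform.internal.ocr.progress"

for t in $TOPICS; do
  kafka-topics.sh --bootstrap-server "$KAFKA_BOOTSTRAP_SERVERS" \
    --create --if-not-exists \
    --topic "$t" --partitions 10 --replication-factor 3
done
```

`--if-not-exists` makes the loop safe to re-run against a cluster that already has some of the topics.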
## Deployment

The AI Platform can be deployed with Kubernetes (Helm) or Docker Compose.

### Kubernetes (Helm)

The AI Platform ships as an umbrella Helm chart aggregating all 23 microservices and their infrastructure dependencies. After installing or upgrading the chart, initialize the platform before first use.
Global configuration is supplied through the chart's values file.
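A minimal, illustrative install flow, assuming the chart is published as `flowx/ai-platform` and the release runs in a `flowx` namespace; the chart reference, namespace, and values keys below are placeholders, not a confirmed schema:

```shell
# Hypothetical values file; the keys shown follow common umbrella-chart
# conventions and are not the confirmed schema
cat > ai-platform-values.yaml <<'EOF'
global:
  imagePullSecrets:
    - name: registry-credentials
EOF

# Install or upgrade the release in one idempotent step
helm upgrade --install ai-platform flowx/ai-platform \
  --namespace flowx --create-namespace \
  -f ai-platform-values.yaml
```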
### Key Helm values

Replica counts:

| Service | Default Replicas |
|---|---|
| Connected Graph | 1 |
| Knowledge Graph | 2 |
| Agents | 2 |
| Models | 2 |
| Tenants | 2 |
| Planner | 2 |
| AI Developer | 2 |
| AI Analyst | 2 |
| AI Designer | 2 |
| Agent Builder | 2 |
## Storage requirements
| Component | Default Persistent Volume | Notes |
|---|---|---|
| DGraph Alpha | 30Gi per node | 3 nodes in HA mode |
| DGraph Zero | Ephemeral | Consensus nodes, no persistent data |
| Qdrant data | 30Gi | Vector embeddings |
| Qdrant snapshots | 30Gi | Backup snapshots |
| MinIO | Distributed across 4 nodes | Configured per deployment |
| Kafka | 1Gi minimum | Adjust based on message volume |
## Troubleshooting
### Service discovery issues

Common causes:

- Incorrect `GRPC_HOST_RESOLVER_HELM_CHART` value
- Services not in the same namespace
- DNS not resolving due to CoreDNS issues
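One way to sanity-check resolution is to reconstruct the DNS name the k8s resolver should produce and query it from inside the cluster; the `<chart>-<service>` naming convention and all names below are assumptions for illustration:

```shell
# Placeholder chart, service, and namespace names
HELM_CHART=ai-platform
SERVICE=agents
NAMESPACE=flowx

# In-cluster FQDN the gRPC resolver is expected to target
FQDN="${HELM_CHART}-${SERVICE}.${NAMESPACE}.svc.cluster.local"
echo "$FQDN"

# From inside the cluster, verify the name actually resolves:
#   kubectl -n flowx run dns-test --rm -it --image=busybox \
#     --restart=Never -- nslookup "$FQDN"
```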
### Database connectivity

Common causes:

- Incorrect `DGRAPH_CONNECTION_GRPC_ENDPOINT` (must include all Alpha nodes for HA)
- Missing `QDRANT_CONNECTION_API_KEY`
- Qdrant cluster not fully initialized
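Hedged examples of the two health checks, assuming DGraph Alpha's default HTTP port 8080 and Qdrant's default HTTP port 6333; the hostnames are placeholders for your in-cluster service names:

```shell
# DGraph Alpha exposes a /health endpoint on its HTTP port
curl -s http://dgraph-alpha-0.dgraph-alpha:8080/health

# Qdrant exposes /healthz on its HTTP port; pass the API key if one is configured
curl -s -H "api-key: ${QDRANT_CONNECTION_API_KEY}" \
  http://qdrant:6333/healthz
```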
### Kafka connectivity

Common causes:

- Wrong `KAFKA_BOOTSTRAP_SERVERS` address
- Topics not auto-created and not manually provisioned
- Security mode mismatch (`KAFKA_SECURITY_MODE`)
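Illustrative checks with the stock Kafka CLI; the tools ship with the Kafka distribution, and their location on your hosts is an assumption:

```shell
# Verify the brokers answer at all (also surfaces security-mode mismatches)
kafka-broker-api-versions.sh --bootstrap-server "$KAFKA_BOOTSTRAP_SERVERS"

# Verify the platform's internal topics exist
kafka-topics.sh --bootstrap-server "$KAFKA_BOOTSTRAP_SERVERS" --list \
  | grep 'ai\.flowx'
```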
### Authentication problems

Common causes:

- Incorrect `SECURITY_OAUTH2_BASE_SERVER_URL`
- Realm name mismatch
- Client ID not registered in Keycloak
- SpiceDB token expired or misconfigured
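A quick connectivity probe, assuming a placeholder realm named `flowx`; note that older Keycloak versions prefix paths with `/auth`:

```shell
# The realm's OpenID discovery document should return HTTP 200 with JSON;
# a 404 usually means a wrong base URL or realm name
curl -s -o /dev/null -w '%{http_code}\n' \
  "${SECURITY_OAUTH2_BASE_SERVER_URL}/realms/flowx/.well-known/openid-configuration"
```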
### AI model configuration errors

If agents fail to start with model configuration errors:

- Verify `OPENAI_API_KEY` is set and valid
- Check that `AI_<AGENT>_MODEL` values are valid base64
- Decode and validate that the JSON structure matches the expected format
- Ensure `MODEL_TYPE` matches the configured provider
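A small sketch of the decode-and-validate step; the variable name and the JSON keys below are hypothetical, not the confirmed schema:

```shell
# Hypothetical encoded model config; real deployments set AI_<AGENT>_MODEL
# (e.g. AI_DEVELOPER_MODEL) from the secret store
AI_DEVELOPER_MODEL=$(printf '%s' '{"provider":"openai","model":"gpt-4o"}' | base64)

# Decode the value; a non-zero exit here means the base64 itself is invalid
decoded=$(printf '%s' "$AI_DEVELOPER_MODEL" | base64 -d)

# Inspect the decoded payload and confirm it is the JSON you expect
printf '%s\n' "$decoded"
```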

