This guide provides essential information for clients who need to configure and connect to the FlowX.AI Platform services.
## Service endpoints

### Java services

| Service Name | Default Port | Protocol | Purpose |
|---|---|---|---|
| Connected Graph | 9100 | GraphQL | Knowledge graph queries |
| Agents | 9101 | gRPC | Agent management |
| Binaries | 9102 | gRPC | File storage |
| Conversations | 9103 | gRPC | Conversation management |
| Models | 9104 | gRPC | AI model configuration |
| Tenants | 9105 | gRPC | Multi-tenant management |
| Knowledge Graph (KAG) | 9106 | gRPC | Knowledge ingestion |

### Python services

| Service Name | Default Port | Protocol | Purpose |
|---|---|---|---|
| Planner | 9150 | gRPC | Task orchestration |
| AI Developer | 9151 | REST | Code generation |
| AI Analyst | 9152 | REST | Process analysis |
| AI Designer | 9153 | REST | UI generation |
Production Note: In production deployments, all services run on port 9100. The ports listed above are for local development using docker-compose.
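
Before wiring up a full client, it can help to confirm that a port is actually serving gRPC. The sketch below is a minimal Python connectivity probe, assuming the local docker-compose ports from the tables above; the target address is illustrative, and `grpcio` is the only dependency.

```python
import grpc

# Hypothetical target: the Models service on its local development port.
target = "localhost:9104"

channel = grpc.insecure_channel(target)
try:
    # channel_ready_future resolves once the channel reaches READY,
    # or raises grpc.FutureTimeoutError when the timeout expires.
    grpc.channel_ready_future(channel).result(timeout=5)
    print(f"{target} is reachable")
except grpc.FutureTimeoutError:
    print(f"{target} did not become ready within 5s")
finally:
    channel.close()
```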

## Required environment variables

### Service discovery configuration

These variables control how services locate each other:

| Variable | Type | Default | Description |
|---|---|---|---|
| GRPC_HOST_RESOLVER | String | k8s | Service discovery method (k8s or host) |
| GRPC_HOST_RESOLVER_HELM_CHART | String | - | Helm chart name (required when GRPC_HOST_RESOLVER=k8s) |
| GRPC_HOST_RESOLVER_FIXED_IP | String | ai-platform | Fixed host or IP for services (required when GRPC_HOST_RESOLVER=host) |

Configuration examples:

For Kubernetes deployment:

```bash
GRPC_HOST_RESOLVER=k8s
GRPC_HOST_RESOLVER_HELM_CHART=ai-platform
```

For local/Docker deployment:

```bash
GRPC_HOST_RESOLVER=host
GRPC_HOST_RESOLVER_FIXED_IP=localhost
```
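
To make the two modes concrete, here is one way a client might turn these variables into a gRPC target. This is a sketch under assumptions: the `<helm-chart>-<service>` DNS naming used in k8s mode is inferred from the examples in this guide, not a documented contract.

```python
import os

def resolve_target(service: str, port: int) -> str:
    """Build a host:port target from the GRPC_HOST_RESOLVER_* variables."""
    resolver = os.environ.get("GRPC_HOST_RESOLVER", "k8s")
    if resolver == "k8s":
        # Assumed naming scheme: <helm-chart>-<service>, e.g. ai-platform-models
        chart = os.environ["GRPC_HOST_RESOLVER_HELM_CHART"]
        return f"{chart}-{service}:{port}"
    # "host" mode: every service is reached through one fixed address.
    host = os.environ.get("GRPC_HOST_RESOLVER_FIXED_IP", "localhost")
    return f"{host}:{port}"

print(resolve_target("models", 9104))  # e.g. "ai-platform-models:9104"
```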

### Service port configuration

| Variable | Type | Default | Description |
|---|---|---|---|
| SERVICE_PORT | Integer | 9100 | Port the service listens on |

### Service endpoint overrides

For custom service locations:

| Variable Pattern | Example | Description |
|---|---|---|
| AI_SERVICE_<service_id>_ENDPOINT | AI_SERVICE_MODELS_ENDPOINT=host.docker.internal:9104 | Override a specific service endpoint |
Common Service IDs:
- MODELS - AI Models service
- CONVERSATIONS - Conversations service
- AGENTS - Agents service
- BINARIES - Binaries service
- TENANTS - Tenants service
- CONNECTED_GRAPH - Connected Graph service
- KAG - Knowledge Graph service
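
A client can layer these overrides on top of whatever discovery mechanism it uses by checking the environment first. Only the AI_SERVICE_<service_id>_ENDPOINT pattern comes from the table above; the fallback logic below is illustrative.

```python
import os

def service_endpoint(service_id: str, default: str) -> str:
    """Return the AI_SERVICE_<ID>_ENDPOINT override if set, else the default."""
    return os.environ.get(f"AI_SERVICE_{service_id}_ENDPOINT", default)

# With AI_SERVICE_MODELS_ENDPOINT=host.docker.internal:9104 exported,
# this returns the override; otherwise the resolved default.
print(service_endpoint("MODELS", "ai-platform-models:9104"))
```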

### Python-specific variables

| Variable | Type | Default | Description |
|---|---|---|---|
| DEBUG | Boolean | false | Enable debug logging |

### Authentication configuration

| Variable | Type | Description |
|---|---|---|
| SECURITY_OAUTH2_BASE_SERVER_URL | String | OAuth2 server base URL |
| SECURITY_OAUTH2_REALM | String | OAuth2 realm name |
| SECURITY_OAUTH2_CLIENT_ID | String | OAuth2 client ID |
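
For clients using the client-credentials grant, a token request might look like the sketch below. It assumes a Keycloak-style token endpoint ({base}/realms/{realm}/protocol/openid-connect/token) and a SECURITY_OAUTH2_CLIENT_SECRET variable that is not part of the table above; confirm both against your realm configuration.

```python
import os
import requests

base = os.environ["SECURITY_OAUTH2_BASE_SERVER_URL"].rstrip("/")
realm = os.environ["SECURITY_OAUTH2_REALM"]
# Keycloak-style token endpoint; adjust if your OAuth2 server differs.
token_url = f"{base}/realms/{realm}/protocol/openid-connect/token"

resp = requests.post(
    token_url,
    data={
        "grant_type": "client_credentials",
        "client_id": os.environ["SECURITY_OAUTH2_CLIENT_ID"],
        "client_secret": os.environ["SECURITY_OAUTH2_CLIENT_SECRET"],  # assumed variable
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```

Send the token as a Bearer credential on REST calls, or as authorization metadata on gRPC calls.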

## Client connection examples

### Testing service connectivity

Use `grpcurl` to test gRPC services:

```bash
# List available services
grpcurl --plaintext localhost:9104 list

# List service methods
grpcurl --plaintext localhost:9104 list ai.flowx.ai.platform.services.internal.models.grpc.ModelsService

# Test connectivity
grpcurl --plaintext \
  -d '{"message": "hello"}' \
  localhost:9104 ai.flowx.ai.platform.services.internal.models.grpc.ModelsService.SayHello
```

### Docker Compose client configuration

For clients running alongside the AI Platform stack:

```yaml
version: '3.8'

services:
  your-client-app:
    image: your-app:latest
    environment:
      # Connect to AI Platform services
      AI_PLATFORM_MODELS_URL: "ai-platform-models:9104"
      AI_PLATFORM_CONVERSATIONS_URL: "ai-platform-conversations:9103"
      AI_PLATFORM_GRAPHQL_URL: "http://ai-platform-connected-graph:9100/graphql"
      # For REST services
      AI_DEVELOPER_URL: "http://ai-platform-developer:9151"
      AI_ANALYST_URL: "http://ai-platform-analyst:9152"
      AI_DESIGNER_URL: "http://ai-platform-designer:9153"
    networks:
      - ai-platform-network

networks:
  ai-platform-network:
    external: true
```

### Kubernetes client configuration

For clients deployed in the same Kubernetes cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-client-app
spec:
  template:
    spec:
      containers:
        - name: app
          image: your-app:latest
          env:
            # Service discovery via Kubernetes DNS
            - name: GRPC_HOST_RESOLVER
              value: "k8s"
            - name: GRPC_HOST_RESOLVER_HELM_CHART
              value: "ai-platform"
            # Direct service URLs (alternative approach)
            - name: AI_PLATFORM_MODELS_URL
              value: "ai-platform-ai-models.default.svc.cluster.local:9100"
            - name: AI_PLATFORM_GRAPHQL_URL
              value: "http://ai-platform-connected-graph.default.svc.cluster.local:9100/graphql"
```

## Prerequisites for clients

### Java clients

If integrating with Java-based services:
- Java 17 or higher
- gRPC client libraries
- Protocol Buffer definitions (available from the platform team)

### Python clients

If integrating with Python-based services:
- Python 3.12.9 or higher
- HTTP client libraries (requests, httpx, etc.)
- WebSocket client for streaming (optional)

### General requirements
- Network access to AI Platform services
- Valid OAuth2 credentials
- SSL/TLS certificates for production deployments

## Common integration patterns

### Service health checks

Before making requests, verify service availability:

```bash
# Check if a gRPC service is responsive
grpcurl --plaintext ${SERVICE_HOST}:${SERVICE_PORT} grpc.health.v1.Health/Check

# Check HTTP service health (for REST services)
curl http://${SERVICE_HOST}:${SERVICE_PORT}/health
```
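
The same probe can be run from Python using the standard gRPC health-checking protocol (pip install grpcio-health-checking). The host and port below are assumptions for local development.

```python
import grpc
from grpc_health.v1 import health_pb2, health_pb2_grpc

channel = grpc.insecure_channel("localhost:9104")
stub = health_pb2_grpc.HealthStub(channel)

# An empty service name asks about the server's overall health.
response = stub.Check(health_pb2.HealthCheckRequest(service=""), timeout=5)
print(health_pb2.HealthCheckResponse.ServingStatus.Name(response.status))
channel.close()
```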

### Load balancing
For production deployments, clients should:
- Use service discovery rather than hardcoded IPs
- Implement retry logic with exponential backoff (see the sketch after this list)
- Configure connection pooling for gRPC clients
- Monitor service health and route around unhealthy instances
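
A minimal retry wrapper for unary gRPC calls might look like this; the retryable status codes and backoff parameters are illustrative, not platform requirements.

```python
import random
import time
import grpc

RETRYABLE = {grpc.StatusCode.UNAVAILABLE, grpc.StatusCode.DEADLINE_EXCEEDED}

def call_with_retry(fn, *args, attempts=5, base_delay=0.2, **kwargs):
    """Retry a unary gRPC call with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return fn(*args, **kwargs)
        except grpc.RpcError as err:
            if err.code() not in RETRYABLE or attempt == attempts - 1:
                raise
            # Backoff doubles each attempt (0.2s, 0.4s, 0.8s, ...) plus jitter.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Wrap each stub call, e.g. call_with_retry(stub.SomeMethod, request, timeout=5), where `stub.SomeMethod` stands in for whatever unary method your generated stub exposes.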

## Security considerations

- Always use HTTPS/TLS in production (see the TLS channel sketch after this list)
- Rotate OAuth2 tokens regularly
- Implement proper timeout handling
- Use connection limits to prevent resource exhaustion
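
As a starting point for the TLS requirement, the sketch below opens a secure channel using the system trust store. The target hostname is a placeholder; mutual TLS or a private CA would need additional credentials.

```python
import grpc

# ssl_channel_credentials() with no arguments uses the default root
# certificates; pass root_certificates=... for a private CA.
credentials = grpc.ssl_channel_credentials()
channel = grpc.secure_channel("models.your-domain.example:443", credentials)
```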

## Troubleshooting

### Connection issues
- Verify environment variables are set correctly
- Check network connectivity between client and services
- Validate OAuth2 configuration and token validity
- Review service logs for authentication/authorization errors

### Service discovery problems

```bash
# For Kubernetes deployments
kubectl get services -l app=ai-platform

# Check whether the Helm chart is deployed
helm list -n ai-platform

# Verify DNS resolution
nslookup ai-platform-ai-models.default.svc.cluster.local
```

### Docker network issues

```bash
# List Docker networks
docker network ls

# Inspect the AI Platform network
docker network inspect ai-platform-network

# Check container connectivity
docker exec -it your-container ping ai-platform-models
```

## Support
When requesting support, please provide:
- Environment configuration (environment variables)
- Deployment method (Docker, Kubernetes, local)
- Client application details (language, framework)
- Error messages and logs
- Network topology if using custom networking

For technical issues, include the output of:

```bash
# Service connectivity test
grpcurl --plaintext ${SERVICE_HOST}:${SERVICE_PORT} list

# Environment variables (sanitized)
env | grep -E "(GRPC_|AI_SERVICE_|SECURITY_)" | sed 's/=.*/=***/'
```