This guide provides essential information for clients who need to configure and connect to the FlowX AI Platform services.

Service endpoints

Java services

| Service Name | Default Port | Protocol | Purpose |
| --- | --- | --- | --- |
| Connected Graph | 9100 | GraphQL | Knowledge graph queries |
| Agents | 9101 | gRPC | Agent management |
| Binaries | 9102 | gRPC | File storage |
| Conversations | 9103 | gRPC | Conversation management |
| Models | 9104 | gRPC | AI model configuration |
| Tenants | 9105 | gRPC | Multi-tenant management |
| Knowledge Graph (KAG) | 9106 | gRPC | Knowledge ingestion |

Python services

| Service Name | Default Port | Protocol | Purpose |
| --- | --- | --- | --- |
| Planner | 9150 | gRPC | Task orchestration |
| AI Developer | 9151 | REST | Code generation |
| AI Analyst | 9152 | REST | Process analysis |
| AI Designer | 9153 | REST | UI generation |

Production note: in production deployments, all services run on port 9100. The ports listed above are for local development using docker-compose.
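
For programmatic use, the port tables above can be captured as a small lookup table. This is a sketch for client code only; the service keys are illustrative names, not an official API:

```python
# Local development ports from the tables above (docker-compose).
# In production every service listens on 9100 instead.
LOCAL_PORTS = {
    "connected-graph": 9100,
    "agents": 9101,
    "binaries": 9102,
    "conversations": 9103,
    "models": 9104,
    "tenants": 9105,
    "kag": 9106,
    "planner": 9150,
    "ai-developer": 9151,
    "ai-analyst": 9152,
    "ai-designer": 9153,
}

def local_endpoint(service: str, host: str = "localhost") -> str:
    """Return host:port for a service in the local docker-compose setup."""
    return f"{host}:{LOCAL_PORTS[service]}"
```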

Required environment variables

Service discovery configuration

These variables control how services locate each other:
| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| GRPC_HOST_RESOLVER | String | k8s | Service discovery method (k8s or host) |
| GRPC_HOST_RESOLVER_HELM_CHART | String | - | Helm chart name (required when GRPC_HOST_RESOLVER=k8s) |
| GRPC_HOST_RESOLVER_FIXED_IP | String | ai-platform | Fixed IP for services (required when GRPC_HOST_RESOLVER=host) |
Configuration examples.

For Kubernetes deployment:

```shell
GRPC_HOST_RESOLVER=k8s
GRPC_HOST_RESOLVER_HELM_CHART=ai-platform
```

For local/Docker deployment:

```shell
GRPC_HOST_RESOLVER=host
GRPC_HOST_RESOLVER_FIXED_IP=localhost
```
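
One way client code can apply these two modes is sketched below. The `<helm-chart>-<service>` naming in k8s mode is an assumption inferred from the examples in this guide, not a documented contract; verify it against your deployment:

```python
import os

def resolve_service_host(service: str, env=os.environ) -> str:
    """Pick a hostname for `service` based on GRPC_HOST_RESOLVER.

    Assumed convention: in k8s mode the host is '<helm-chart>-<service>'
    (resolved by cluster DNS); in host mode every service shares one
    fixed IP or hostname.
    """
    resolver = env.get("GRPC_HOST_RESOLVER", "k8s")
    if resolver == "k8s":
        chart = env["GRPC_HOST_RESOLVER_HELM_CHART"]  # required in k8s mode
        return f"{chart}-{service}"
    if resolver == "host":
        return env.get("GRPC_HOST_RESOLVER_FIXED_IP", "localhost")
    raise ValueError(f"unknown GRPC_HOST_RESOLVER: {resolver}")
```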

Service port configuration

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| SERVICE_PORT | Integer | 9100 | Port the service listens on |

Service endpoint overrides

For custom service locations:
| Variable Pattern | Example | Description |
| --- | --- | --- |
| AI_SERVICE_<service_id>_ENDPOINT | AI_SERVICE_MODELS_ENDPOINT=host.docker.internal:9104 | Override a specific service endpoint |
Common Service IDs:
  • MODELS - AI Models service
  • CONVERSATIONS - Conversations service
  • AGENTS - Agents service
  • BINARIES - Binaries service
  • TENANTS - Tenants service
  • CONNECTED_GRAPH - Connected Graph service
  • KAG - Knowledge Graph service
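
A client can honor these overrides before falling back to defaults. The sketch below assumes the local development ports from the tables earlier in this guide; adjust the fallback map for your environment:

```python
import os

# Fallback ports for local development; in production all services use 9100.
DEFAULT_PORTS = {
    "MODELS": 9104,
    "CONVERSATIONS": 9103,
    "AGENTS": 9101,
    "BINARIES": 9102,
    "TENANTS": 9105,
    "CONNECTED_GRAPH": 9100,
    "KAG": 9106,
}

def service_endpoint(service_id: str, default_host: str = "localhost",
                     env=os.environ) -> str:
    """Resolve an endpoint, honoring an AI_SERVICE_<service_id>_ENDPOINT override."""
    override = env.get(f"AI_SERVICE_{service_id}_ENDPOINT")
    if override:
        return override
    return f"{default_host}:{DEFAULT_PORTS[service_id]}"
```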

Python-specific variables

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| DEBUG | Boolean | false | Enable debug logging |

Authentication configuration

| Variable | Type | Description |
| --- | --- | --- |
| SECURITY_OAUTH2_BASE_SERVER_URL | String | OAuth2 server base URL |
| SECURITY_OAUTH2_REALM | String | OAuth2 realm name |
| SECURITY_OAUTH2_CLIENT_ID | String | OAuth2 client ID |
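
If the OAuth2 server follows the Keycloak URL layout (an assumption; verify against your identity provider), the token endpoint and a client-credentials request body can be derived from these variables:

```python
def token_endpoint(base_url: str, realm: str) -> str:
    """Keycloak-style token endpoint (assumed layout; adjust per provider)."""
    return f"{base_url.rstrip('/')}/realms/{realm}/protocol/openid-connect/token"

def client_credentials_payload(client_id: str, client_secret: str) -> dict:
    """Form body for a standard OAuth2 client-credentials grant."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
```

POST the payload as form data to the token endpoint and use the returned access token as a bearer token on subsequent requests.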

Client connection examples

Testing service connectivity

Use grpcurl to test gRPC services:
```shell
# List available services
grpcurl --plaintext localhost:9104 list

# List service methods
grpcurl --plaintext localhost:9104 list ai.flowx.ai.platform.services.internal.models.grpc.ModelsService

# Test connectivity
grpcurl --plaintext \
    -d '{"message": "hello"}' \
    localhost:9104 ai.flowx.ai.platform.services.internal.models.grpc.ModelsService.SayHello
```

Docker Compose client configuration

For clients running alongside the AI Platform stack:
```yaml
version: '3.8'
services:
  your-client-app:
    image: your-app:latest
    environment:
      # Connect to AI Platform services
      AI_PLATFORM_MODELS_URL: "ai-platform-models:9104"
      AI_PLATFORM_CONVERSATIONS_URL: "ai-platform-conversations:9103"
      AI_PLATFORM_GRAPHQL_URL: "http://ai-platform-connected-graph:9100/graphql"

      # For REST services
      AI_DEVELOPER_URL: "http://ai-platform-developer:9151"
      AI_ANALYST_URL: "http://ai-platform-analyst:9152"
      AI_DESIGNER_URL: "http://ai-platform-designer:9153"
    networks:
      - ai-platform-network

networks:
  ai-platform-network:
    external: true
```

Kubernetes client configuration

For clients deployed in the same Kubernetes cluster:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-client-app
spec:
  selector:
    matchLabels:
      app: your-client-app
  template:
    metadata:
      labels:
        app: your-client-app
    spec:
      containers:
      - name: app
        image: your-app:latest
        env:
        # Service discovery via Kubernetes DNS
        - name: GRPC_HOST_RESOLVER
          value: "k8s"
        - name: GRPC_HOST_RESOLVER_HELM_CHART
          value: "ai-platform"

        # Direct service URLs (alternative approach)
        - name: AI_PLATFORM_MODELS_URL
          value: "ai-platform-ai-models.default.svc.cluster.local:9100"
        - name: AI_PLATFORM_GRAPHQL_URL
          value: "http://ai-platform-connected-graph.default.svc.cluster.local:9100/graphql"
```

Prerequisites for clients

Java clients

If integrating with Java-based services:
  • Java 17 or higher
  • gRPC client libraries
  • Protocol Buffer definitions (available from the platform team)

Python clients

If integrating with Python-based services:
  • Python 3.12.9 or higher
  • HTTP client libraries (requests, httpx, etc.)
  • WebSocket client for streaming (optional)

General requirements

  • Network access to AI Platform services
  • Valid OAuth2 credentials
  • SSL/TLS certificates for production deployments

Common integration patterns

Service health checks

Before making requests, verify service availability:
```shell
# Check if gRPC service is responsive
grpcurl --plaintext ${SERVICE_HOST}:${SERVICE_PORT} grpc.health.v1.Health/Check

# Check HTTP service health (for REST services)
curl http://${SERVICE_HOST}:${SERVICE_PORT}/health
```

Load balancing

For production deployments, clients should:
  1. Use service discovery rather than hardcoded IPs
  2. Implement retry logic with exponential backoff
  3. Configure connection pooling for gRPC clients
  4. Monitor service health and route around unhealthy instances
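
The retry-with-exponential-backoff recommendation (item 2) can be sketched as follows; the base delay, cap, and retry count are illustrative defaults:

```python
import time

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0):
    """Exponentially increasing delays, capped (no jitter, for determinism)."""
    return [min(cap, base * (2 ** i)) for i in range(attempts)]

def call_with_retry(fn, retries: int = 5, sleep=time.sleep):
    """Call fn(), retrying on ConnectionError with exponential backoff."""
    delays = backoff_delays(retries - 1)
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # retries exhausted; surface the last error
            sleep(delays[attempt])
```

In production, add random jitter to each delay so many clients recovering at once do not retry in lockstep.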

Security considerations

  1. Always use HTTPS/TLS in production
  2. Rotate OAuth2 tokens regularly
  3. Implement proper timeout handling
  4. Use connection limits to prevent resource exhaustion

Troubleshooting

Connection issues

  1. Verify environment variables are set correctly
  2. Check network connectivity between client and services
  3. Validate OAuth2 configuration and token validity
  4. Review service logs for authentication/authorization errors

Service discovery problems

```shell
# For Kubernetes deployments
kubectl get services -l app=ai-platform

# Check if Helm chart is deployed
helm list -n ai-platform

# Verify DNS resolution
nslookup ai-platform-ai-models.default.svc.cluster.local
```

Docker network issues

```shell
# List Docker networks
docker network ls

# Inspect AI Platform network
docker network inspect ai-platform-network

# Check container connectivity
docker exec -it your-container ping ai-platform-models
```

Support

When requesting support, please provide:
  1. Environment configuration (environment variables)
  2. Deployment method (Docker, Kubernetes, local)
  3. Client application details (language, framework)
  4. Error messages and logs
  5. Network topology if using custom networking
For technical issues, include the output of:
```shell
# Service connectivity test
grpcurl --plaintext ${SERVICE_HOST}:${SERVICE_PORT} list

# Environment variables (sanitized)
env | grep -E "(GRPC_|AI_SERVICE_|SECURITY_)" | sed 's/=.*/=***/'
```