Technical FAQ

Technical Implementation

Technical questions about AgenixChat implementation, integration, architecture, and customization

What technology stack does AgenixChat use?

AgenixChat is built on a modern, production-ready stack:
- Frontend: Next.js 15 with React 19, TypeScript 5, and Tailwind CSS for styling.
- Backend: Next.js API routes with a PostgreSQL 12+ database and Prisma 6 ORM for type-safe database access.
- Authentication: NextAuth.js 4 with JWT-based sessions, supporting SSO via SAML 2.0 and OAuth 2.0.
- AI Integration: A flexible connector supporting chatflows, workflows, and autonomous agents with both blocking and streaming response modes.

The entire stack is containerized with Docker for consistent deployments across environments.
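
As an illustration of how the backend layer fits together, here is a minimal sketch of a type-safe Next.js API route backed by Prisma. The `conversation` model and its fields are hypothetical placeholders for this example, not the actual AgenixChat schema.

```typescript
// app/api/conversations/route.ts (illustrative Next.js App Router handler;
// the `conversation` model and its fields are hypothetical, not the real schema)
import { NextResponse } from "next/server";
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function GET() {
  // Prisma generates typed query methods from the schema, so this call and
  // the returned records are checked by TypeScript at compile time.
  const conversations = await prisma.conversation.findMany({
    orderBy: { createdAt: "desc" },
    take: 20,
  });
  return NextResponse.json(conversations);
}
```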

How does AgenixChat integrate with existing systems?

AgenixChat provides multiple integration patterns:
- REST API (v1): Full-featured API for programmatic access to all platform capabilities.
- MCP Server: Model Context Protocol server for AI service integration.
- Webhooks: Real-time event notifications for conversation events, user actions, and system events.
- SSO Integration: SAML 2.0, OAuth 2.0, and LDAP support for enterprise identity providers (Okta, Azure AD, Google Workspace).
- Data Connectors: Pre-built connectors for popular platforms (Salesforce, Shopify, Snowflake, Databricks), with custom connector development available.

All integrations support encrypted communication and API key management.
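
To show what consuming webhook events can look like, here is a hedged sketch of a receiver endpoint. The event names, payload shape, and the `x-agenixchat-signature` header are assumptions for illustration; the actual contract is defined by the webhook documentation for your deployment.

```typescript
// Hypothetical webhook receiver for AgenixChat events. The payload shape and
// the `x-agenixchat-signature` header are assumptions, not a documented contract.
import { createHmac, timingSafeEqual } from "node:crypto";
import { NextResponse } from "next/server";

const WEBHOOK_SECRET = process.env.AGENIXCHAT_WEBHOOK_SECRET ?? "";

export async function POST(req: Request) {
  const rawBody = await req.text();

  // Verify the HMAC signature so only events signed with the shared secret are trusted.
  const expected = createHmac("sha256", WEBHOOK_SECRET).update(rawBody).digest("hex");
  const received = req.headers.get("x-agenixchat-signature") ?? "";
  if (
    expected.length !== received.length ||
    !timingSafeEqual(Buffer.from(expected), Buffer.from(received))
  ) {
    return NextResponse.json({ error: "invalid signature" }, { status: 401 });
  }

  const event = JSON.parse(rawBody) as { type: string; data: unknown };
  switch (event.type) {
    case "message.created":
    case "conversation.closed":
      // Hand the event off to your own systems (ticketing, analytics, etc.) here.
      break;
  }
  return NextResponse.json({ received: true });
}
```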

What databases are supported?

AgenixChat uses PostgreSQL 12+ as its primary database, chosen for reliability, ACID compliance, and excellent performance with complex queries. The database schema is managed via Prisma ORM, providing type safety and automated migrations. For enterprise deployments, we support:
- Managed PostgreSQL: AWS RDS, Google Cloud SQL, Azure Database for PostgreSQL.
- Self-hosted: PostgreSQL on your own infrastructure with replication support.
- High Availability: Primary-replica setup with automatic failover.
- Backup & Recovery: Automated daily backups with point-in-time recovery.

The multi-tenant architecture ensures complete data isolation at the database level.
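
As a concrete example of wiring the application to PostgreSQL, the sketch below shows a common Prisma client pattern for Next.js deployments. It illustrates the setup described above and is not AgenixChat's actual code.

```typescript
// lib/prisma.ts: a common Prisma/Next.js pattern (a sketch, not AgenixChat's
// actual code). Reuse one PrismaClient per process so hot reloads and
// serverless invocations don't exhaust PostgreSQL connections.
import { PrismaClient } from "@prisma/client";

const globalForPrisma = globalThis as unknown as { prisma?: PrismaClient };

export const prisma =
  globalForPrisma.prisma ??
  new PrismaClient({
    // DATABASE_URL points at managed or self-hosted PostgreSQL, e.g.
    // postgresql://user:pass@db.example.com:5432/agenixchat?sslmode=require
    log: ["warn", "error"],
  });

if (process.env.NODE_ENV !== "production") globalForPrisma.prisma = prisma;
```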

How does the AI integration work?

AgenixChat integrates with your AI service through a flexible connector architecture:
- AI Service Connector: Connects to your in-house AI service or a third-party AI platform.
- Supported Modes: Chatflows (conversational), Workflows (multi-step processes), and Autonomous Agents (goal-driven).
- Response Types: Blocking (wait for the complete response) and Streaming (real-time token streaming).
- Authentication: Secure API key management with encryption at rest.
- Message Routing: Intelligent routing based on space configuration and user context.
- Conversation History: Automatic context management with configurable retention.
- Error Handling: Graceful degradation with retry logic and fallback responses.

The connector is extensible, allowing custom AI service integration.
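
To make the blocking vs. streaming distinction above concrete, here is a minimal sketch of a streaming call to an AI service. The endpoint, request body, and response framing are assumptions; the real connector contract depends on the AI service configured for the space.

```typescript
// A minimal streaming sketch. The endpoint, request body, and response framing
// are assumptions for illustration, not the actual connector contract.
async function streamCompletion(prompt: string, onToken: (token: string) => void) {
  const res = await fetch(process.env.AI_SERVICE_URL ?? "https://ai.example.com/v1/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.AI_SERVICE_API_KEY}`,
    },
    body: JSON.stringify({ prompt, stream: true }), // stream: false would correspond to blocking mode
  });
  if (!res.ok || !res.body) throw new Error(`AI service error: ${res.status}`);

  // Read the response body incrementally and surface text as it arrives.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onToken(decoder.decode(value, { stream: true }));
  }
}
```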

Can we use our own AI models?

Yes! AgenixChat is designed to work with your AI infrastructure. You can integrate:
- Your AI Service: Connect to your in-house AI platform or API.
- Custom Models: Use your fine-tuned models, proprietary algorithms, or specialized AI services.
- Multiple Models: Configure different models per space or use case.
- Model Switching: Dynamically route to different models based on query type or user role.
- Hybrid Approach: Combine multiple AI services (e.g., GPT-4 for general queries, a custom model for domain-specific questions).

The platform is AI-agnostic and doesn't lock you into any specific vendor or model. You maintain complete control over your AI infrastructure and model selection.
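
The per-space model selection described above could be expressed roughly as follows. The space names, model identifiers, and routing rule are purely illustrative; actual routing is driven by each space's configuration.

```typescript
// Hypothetical per-space model routing table (names and model IDs are illustrative).
type QueryType = "code" | "billing" | "general";

type SpaceModelConfig = {
  defaultModel: string;
  overrides?: { queryType: QueryType; model: string }[];
};

const spaceConfigs: Record<string, SpaceModelConfig> = {
  "customer-support": {
    defaultModel: "gpt-4",
    overrides: [{ queryType: "billing", model: "in-house/billing-expert-v2" }],
  },
  engineering: { defaultModel: "in-house/code-assistant-v3" },
};

function resolveModel(spaceId: string, queryType: QueryType): string {
  const config = spaceConfigs[spaceId];
  if (!config) throw new Error(`Unknown space: ${spaceId}`);
  // Fall back to the space's default model when no override matches the query type.
  const override = config.overrides?.find((o) => o.queryType === queryType);
  return override?.model ?? config.defaultModel;
}

// resolveModel("customer-support", "billing") => "in-house/billing-expert-v2"
```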

What APIs are available?

AgenixChat provides a comprehensive REST API (v1) with the following capabilities:
- Conversation Management: Create, read, and update conversations; send messages; retrieve history.
- Space Management: CRUD operations for spaces, AI workflow configuration, permission management.
- User Management: User CRUD, role assignment, authentication token management.
- Analytics: Usage metrics, conversation stats, user activity, space performance.
- Configuration: System settings, AI service configuration, branding customization.
- Webhooks: Subscribe to real-time events (new messages, user actions, system events).

All API endpoints are authenticated via API keys or JWT tokens, support rate limiting, and return JSON responses. Full OpenAPI/Swagger documentation is provided.
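
As a quick illustration of calling the REST API, here is a hedged client sketch. The base URL, endpoint path, and response handling are placeholders; the OpenAPI/Swagger documentation is the authoritative reference.

```typescript
// Illustrative API client call; the endpoint path and payload are placeholders.
const BASE_URL = process.env.AGENIXCHAT_API_URL ?? "https://your-instance.example.com/api/v1";

async function sendMessage(conversationId: string, content: string) {
  const res = await fetch(`${BASE_URL}/conversations/${conversationId}/messages`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.AGENIXCHAT_API_KEY}`, // API key or JWT
    },
    body: JSON.stringify({ content }),
  });
  if (res.status === 429) throw new Error("Rate limit exceeded; retry later");
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json(); // all endpoints return JSON
}
```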

How is data stored and processed?

Data storage and processing in AgenixChat follows enterprise security best practices:
- Data Storage: All data is stored in PostgreSQL with encryption at rest. User passwords are hashed with bcrypt (never stored in plain text). API keys are encrypted before storage.
- Data Processing: Messages are processed in memory and immediately persisted to the database. AI service communication uses encrypted HTTPS connections. No data is sent to third parties without explicit configuration.
- Data Retention: Configurable retention policies per space. Automatic cleanup of old conversations based on policy. Soft deletes with an audit trail for compliance.
- Data Isolation: Multi-tenant architecture with database-level isolation. Each space has separate data boundaries. Cross-tenant data access is prevented at the application and database levels.
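
For reference, bcrypt password hashing typically looks like the sketch below (shown with the widely used `bcryptjs` package; the exact library and cost factor used by AgenixChat are assumptions).

```typescript
// A sketch of bcrypt password hashing; the library and cost factor are assumptions.
import bcrypt from "bcryptjs";

const SALT_ROUNDS = 12; // higher cost = slower brute-force attempts, slower verification

export async function hashPassword(plain: string): Promise<string> {
  // A unique salt is generated per password and embedded in the resulting hash.
  return bcrypt.hash(plain, SALT_ROUNDS);
}

export async function verifyPassword(plain: string, hash: string): Promise<boolean> {
  return bcrypt.compare(plain, hash);
}
```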

What are the system requirements?

System requirements vary by deployment tier:
- Startup/Trial: 2-4 CPU cores, 4-8 GB RAM, 20-50 GB SSD storage. Suitable for POCs and small teams (5-10 users), up to 10K messages/month.
- Professional: 8-16 CPU cores, 16-32 GB RAM, 100-250 GB SSD storage. Suitable for departments (50-100 users), up to 100K messages/month.
- Enterprise: 16-32 CPU cores, 64-128 GB RAM, 500 GB - 1 TB SSD storage. Suitable for large enterprises (500+ users), unlimited messages, and data-intensive workloads.
- Enterprise Plus: 32+ CPU cores, 128+ GB RAM, 1+ TB NVMe SSD storage. Suitable for mission-critical workloads, data lakes, and millions of records.

All tiers require PostgreSQL 12+, Node.js 18+, and a modern browser for the web interface.

How does multi-tenancy work?

AgenixChat implements database-level multi-tenancy for maximum security and isolation:
- Tenant Isolation: Each space (tenant) has a unique identifier used to partition data. All database queries are automatically scoped to the tenant context. Cross-tenant data access is prevented at the ORM level.
- Resource Isolation: Separate AI service endpoints per space. Independent rate limits and quotas per tenant. Isolated API keys and authentication tokens.
- Scaling: Each tenant can scale independently based on usage. Resource quotas prevent one tenant from impacting others. Load balancing distributes requests across instances.
- Data Privacy: Complete data separation between tenants. No shared data structures or caches. Audit logs track all cross-tenant access attempts (which are blocked).
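
To illustrate what ORM-level tenant scoping can look like, here is a sketch using a Prisma client query extension. The `spaceId` column and the set of scoped operations are hypothetical and may differ from AgenixChat's actual implementation.

```typescript
// Sketch of ORM-level tenant scoping via a Prisma query extension; the
// `spaceId` column and scoped operations are hypothetical.
import { PrismaClient } from "@prisma/client";

export function tenantScopedClient(prisma: PrismaClient, spaceId: string) {
  return prisma.$extends({
    query: {
      $allModels: {
        // Inject the tenant filter into reads so one space can never see
        // rows that belong to another space (assumes every model has spaceId).
        async findMany({ args, query }) {
          args.where = { ...args.where, spaceId } as typeof args.where;
          return query(args);
        },
        async findFirst({ args, query }) {
          args.where = { ...args.where, spaceId } as typeof args.where;
          return query(args);
        },
      },
    },
  });
}
```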

Can we customize the platform?

Yes, AgenixChat offers extensive customization options:
- Branding: Custom logo, colors, fonts, and styling. White-label deployment option for Enterprise Plus.
- Workflows: Configure custom AI workflows per space. Define multi-step processes and decision trees.
- Integrations: Custom connector development for proprietary systems. The API-first architecture enables custom integrations.
- UI Customization: Modify chat interface components. Add custom fields and metadata.
- Business Logic: Custom validation rules and data processing. Webhook-based automation for custom workflows.
- Professional Services: Our team can develop custom features, integrations, and modifications specific to your requirements. Source code access is available for Enterprise Plus customers.
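
As an example of the kind of custom connector the API-first architecture enables, here is a hypothetical connector shape. The interface, methods, and endpoints are illustrative only, not a published SDK.

```typescript
// Hypothetical shape for a custom data connector (illustrative, not a published SDK).
interface DataConnector {
  /** Unique identifier, e.g. "internal-crm" */
  id: string;
  /** Verify credentials and connectivity before the connector is enabled. */
  testConnection(): Promise<boolean>;
  /** Fetch records an AI workflow can use as context for a query. */
  fetchContext(query: string, limit: number): Promise<Record<string, unknown>[]>;
}

// Example: a connector for a proprietary internal system (URLs are placeholders).
const internalCrmConnector: DataConnector = {
  id: "internal-crm",
  async testConnection() {
    const res = await fetch("https://crm.internal.example.com/health");
    return res.ok;
  },
  async fetchContext(query, limit) {
    const res = await fetch(
      `https://crm.internal.example.com/search?q=${encodeURIComponent(query)}&limit=${limit}`
    );
    return (await res.json()) as Record<string, unknown>[];
  },
};
```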

Have More Questions?

Our technical team is here to help with your specific requirements

Contact Technical Team