Botmaton is built on a three-layer architecture where each layer has distinct responsibilities. The Core provides the foundation, Tools deliver modular functionality, and the Interactive Facility Map serves as the primary operational view accessed via the 7th navigation button.
The complete database schema supports all platform functionality with proper foreign key relationships, audit trails, and soft deletes. Every table includes created_at, updated_at, created_by, and updated_by columns for full traceability.
| Domain | Tables | Key Tables |
|---|---|---|
| Core Platform | 8 | users, roles, sessions, audit_log, notifications, documents, system_config, translations |
| Facility & Spatial | 4 | facilities, buildings, areas, spatial_metadata |
| Equipment | 3 | equipment, components, equipment_documents |
| Maintenance Manager | 7 | work_orders, pm_schedules, pm_tasks, parts, parts_equipment, inventory_transactions, downtime_events |
| LOTO Builder | 4 | loto_procedures, loto_energy_sources, loto_isolation_steps, loto_verifications |
| Compliance Tracker | 5 | regulatory_requirements, inspections, training_certifications, incidents, incident_actions |
| Report Builder | 3 | report_templates, scheduled_reports, report_history |
| Import/Migration | 2 | import_jobs, import_field_mappings |
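The shared audit columns can be illustrated with a small helper. This is a hedged sketch, not Botmaton's actual data layer: the function names are hypothetical, and the `deleted_at` column is an assumption inferred from the soft-delete requirement (only `created_at`, `updated_at`, `created_by`, and `updated_by` are named above).

```javascript
// Hypothetical helpers stamping the shared audit columns described above.
// The deleted_at column is assumed from the soft-delete requirement.
function stampCreate(record, userId) {
  const now = new Date().toISOString();
  return {
    ...record,
    created_at: now,
    updated_at: now,
    created_by: userId,
    updated_by: userId,
    deleted_at: null,
  };
}

function stampUpdate(record, userId) {
  return { ...record, updated_at: new Date().toISOString(), updated_by: userId };
}

// Soft delete: the row is flagged, never physically removed,
// so the audit trail stays intact.
function softDelete(record, userId) {
  return { ...stampUpdate(record, userId), deleted_at: new Date().toISOString() };
}
```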
Each tool is independently deployable and follows a modular architecture. Tools are sold in tiered bundles or individually (à la carte). All tools plug into the Core platform and share the central database schema.
Aura is not just a voice assistant: she is the facility operations AI with full platform access, backend control, and issue-resolution capability, and she also serves as the training/onboarding system. There is no separate guided tour; Aura teaches users how to use every tool.
Botmaton is designed for industrial environments where downtime directly impacts safety and operations. The redundancy architecture ensures Aura AI and critical facility data remain available through a multi-tier failover strategy.
The primary AI backend is a locally-hosted Ollama instance running on the facility's own hardware. This provides zero-latency responses, full data privacy, and complete offline capability.
If the local Ollama instance becomes unavailable, the Botmaton API automatically fails over to a remote Ollama instance: a secondary on-premise server, company data center, or VPS.
If both local and remote Ollama instances are unavailable, Botmaton can route Aura queries to a cloud LLM API (OpenAI, Anthropic, etc.). Sensitive facility data can be filtered before sending.
When all AI backends are unreachable, Aura falls back to the built-in static response engine, which uses findBestMatch() pattern matching. No AI generation occurs, but the platform remains functional.
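A minimal sketch of how such a static fallback might work. The keyword-overlap scoring and the response table are assumptions for illustration, not Botmaton's actual findBestMatch() implementation.

```javascript
// Illustrative static response table (entries are assumptions).
const STATIC_RESPONSES = [
  { patterns: ['work order', 'maintenance'], reply: 'Open the Maintenance Manager to view work orders.' },
  { patterns: ['loto', 'lockout', 'tagout'], reply: 'LOTO procedures are available in the LOTO Builder.' },
];

// Score each entry by how many of its patterns appear in the query;
// return the best match, or a canned offline message.
function findBestMatchSketch(query) {
  const text = query.toLowerCase();
  let best = null;
  let bestScore = 0;
  for (const entry of STATIC_RESPONSES) {
    const score = entry.patterns.filter((p) => text.includes(p)).length;
    if (score > bestScore) {
      bestScore = score;
      best = entry;
    }
  }
  return best ? best.reply : 'Aura is offline; please try again later.';
}
```

No generation happens here, which is exactly why this tier stays available when every AI backend is down.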
Aura's voice output: ElevenLabs (premium cloud) → Piper TTS (local Docker) → Web Speech API (browser). Audio-reactive visuals work with all three.
Speech-to-text: Whisper.cpp (local GPU) → Whisper via Ollama → Web Speech Recognition API. Wake word detection runs independently.
PostgreSQL (36 tables) with automated daily backups, WAL archiving, and optional streaming replication. Redis (optional) for session caching with RDB + AOF persistence.
Every service includes health checks. Telegraf monitors containers. N8N triggers alerts when services degrade. Docker Compose restart policies handle transient failures.
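An illustrative docker-compose fragment showing the health-check and restart-policy pattern described above. The service definition is an assumption sketched for one service, not the actual Botmaton compose file.

```yaml
# Illustrative fragment (service details are assumptions):
services:
  ollama:
    image: ollama/ollama
    restart: unless-stopped          # recover automatically from transient failures
    healthcheck:
      test: ["CMD", "ollama", "list"]  # succeeds only when the Ollama daemon responds
      interval: 30s
      timeout: 5s
      retries: 3
```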
Botmaton is offline-first. Frontend works without network. Aura degrades gracefully through failover tiers. Queued actions sync on reconnection.
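The queued-action sync can be sketched as a small in-memory queue. Names and shape are illustrative, not Botmaton's actual frontend code: actions dispatched while offline are held, then flushed in order on reconnection.

```javascript
// Sketch of an offline-first action queue (names are assumptions).
// `send` is whatever actually delivers an action to the backend.
function createActionQueue(send) {
  const pending = [];
  let online = false;
  return {
    dispatch(action) {
      if (online) send(action);
      else pending.push(action); // hold until reconnection
    },
    setOnline(isOnline) {
      online = isOnline;
      // Flush queued actions in FIFO order once connectivity returns.
      while (online && pending.length) send(pending.shift());
    },
    get pendingCount() {
      return pending.length;
    },
  };
}
```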
Isolated Docker networks. TLS via nginx. JWT authentication with RBAC (Admin, Manager, Technician, Viewer). Ollama and PostgreSQL never exposed publicly.
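A minimal sketch of an RBAC check over the four roles named above. The linear hierarchy (Viewer < Technician < Manager < Admin) is an assumption; the real policy may be permission-based rather than rank-based.

```javascript
// Assumed linear role hierarchy for the four Botmaton roles.
const ROLE_RANK = { Viewer: 0, Technician: 1, Manager: 2, Admin: 3 };

// A user satisfies a requirement if their role ranks at least as high.
// In practice this check would run after JWT verification.
function hasRole(userRole, requiredRole) {
  return ROLE_RANK[userRole] >= ROLE_RANK[requiredRole];
}
```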
The Botmaton API implements a unified failover manager that routes requests through the tier chain. Each tier has configurable timeout, retry count, and health thresholds.
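The tier-chain routing can be sketched as follows. This is a simplified illustration under assumptions: health thresholds are omitted, each tier is reduced to a name, an `ask` function, a timeout, and a retry count, and the function names are hypothetical.

```javascript
// Race a promise against a timeout; reject if the tier is too slow.
async function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer);
  }
}

// Walk the tier chain: per-tier retries, then fall through to the next tier.
async function routeQuery(query, tiers) {
  for (const tier of tiers) {
    for (let attempt = 0; attempt <= tier.retries; attempt++) {
      try {
        const reply = await withTimeout(tier.ask(query), tier.timeoutMs);
        return { tier: tier.name, reply };
      } catch (err) {
        // Swallow the error and try the next attempt or tier.
      }
    }
  }
  throw new Error('all tiers exhausted');
}
```

A tier chain would then be configured in failover order, e.g. local Ollama, remote Ollama, cloud API, static engine, with the static engine's `ask` never failing.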
Mermaid renders diagrams from text definitions. Write as markdown, render to SVG/PNG automatically. Integrates with GitHub and can be embedded in docx via image conversion.
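For example, the AI failover chain described earlier could be written as a Mermaid text definition like this (labels are illustrative):

```mermaid
flowchart LR
    A[Local Ollama] -->|unavailable| B[Remote Ollama]
    B -->|unavailable| C[Cloud LLM API]
    C -->|unreachable| D[Static findBestMatch engine]
```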
An N8N workflow watches for changes to docker-compose.yml or MCP configs, triggers diagram regeneration using Mermaid CLI or D3.js, and drops the updated SVG/PNG into an output folder.
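The trigger condition can be sketched as a small filter a watcher might apply. The filename patterns are assumptions, as is the example Mermaid CLI invocation in the comment.

```javascript
// Assumed watch patterns: compose files and MCP config files.
const WATCHED = [/docker-compose\.ya?ml$/, /mcp.*\.json$/i];

// Decide whether a changed file should trigger diagram regeneration.
function shouldRegenerate(filename) {
  return WATCHED.some((re) => re.test(filename));
}

// A full workflow would then shell out to the Mermaid CLI, e.g.:
//   mmdc -i architecture.mmd -o architecture.svg
```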
For publication-quality figures, use D3.js or raw SVG generation. A Node.js script reads Docker and MCP configs, generates styled SVGs matching Botmaton's visual language.
During documentation sessions, Claude can use Filesystem MCP to read docker-compose.yml, extract running services, and generate an up-to-date architecture diagram as HTML or SVG.
A script walks the actual filesystem and generates a styled folder-tree image. Run it before each documentation session to ensure folder trees are always accurate.
Combine Claude's on-demand generation with N8N scheduled updates. Architecture diagrams update automatically when configs change, folder trees regenerate before each writing session.