AUTONOMA — DIGITAL FTE.
An open-source AI agent platform designed to build, deploy, and run digital employees, bridging the gap between reactive chatbots and proactive, memory-augmented digital operators.
The Intelligence Framework
Layer 1: Omni-Channel Gateway
Natively supports Telegram, Discord, WhatsApp, and Gmail via a robust event-driven router. Enables a multi-session paradigm where a single agent maintains contextual continuity across all endpoints.
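The multi-session idea can be sketched as a router that normalizes every channel's messages into one event shape keyed by session, so the agent sees a single cross-channel history. The names here (`Event`, `ChannelRouter`, the channel strings) are illustrative assumptions, not Autonoma's actual API.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Event:
    channel: str      # e.g. "telegram", "discord", "whatsapp", "gmail"
    session_id: str   # stable per-user key shared across channels
    text: str

class ChannelRouter:
    """Funnels all channel adapters into one per-session history."""
    def __init__(self):
        self.queue: asyncio.Queue = asyncio.Queue()
        self.sessions: dict[str, list[Event]] = {}

    async def publish(self, event: Event) -> None:
        await self.queue.put(event)

    async def dispatch(self) -> Event:
        event = await self.queue.get()
        # Every channel appends to the same session log, which is what
        # gives the agent contextual continuity across endpoints.
        self.sessions.setdefault(event.session_id, []).append(event)
        return event

async def demo() -> int:
    router = ChannelRouter()
    await router.publish(Event("telegram", "user-1", "hi"))
    await router.publish(Event("gmail", "user-1", "follow-up"))
    await router.dispatch()
    await router.dispatch()
    return len(router.sessions["user-1"])  # both channels, one history
```

Because routing is queue-based, adding a new channel is just another adapter calling `publish`.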
Layer 2: Cortex & Memory Engine
Built on SQLite with FTS5, implementing BM25-ranked retrieval to fetch context dynamically. Memory importance decays over time, keeping the agent's context window compact and relevant.
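A minimal sketch of the retrieval idea: an FTS5 table queried with SQLite's built-in `bm25()` rank, with an exponential recency decay folded into the score. The table name, columns, and seven-day half-life are illustrative assumptions, not Autonoma's actual schema.

```python
import math
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE memories USING fts5(content, created_at UNINDEXED)"
)
now = time.time()
conn.executemany(
    "INSERT INTO memories VALUES (?, ?)",
    [("user prefers morning reports", now - 1 * 86400),
     ("deploy failed on staging", now - 5 * 86400)],
)

def recall(query: str, half_life_days: float = 7.0) -> list[str]:
    # FTS5's bm25() returns smaller-is-better values, so negate it,
    # then weight by an exponential decay on the memory's age.
    cur = conn.execute(
        "SELECT content, created_at, bm25(memories) AS rank "
        "FROM memories WHERE memories MATCH ? ORDER BY rank",
        (query,),
    )
    scored = []
    for content, created_at, rank in cur:
        age_days = (time.time() - created_at) / 86400
        decay = math.exp(-math.log(2) * age_days / half_life_days)
        scored.append((-rank * decay, content))
    return [c for _, c in sorted(scored, reverse=True)]
```

Calling `recall("reports")` surfaces only the matching memory, and among multiple matches the fresher one outranks the stale one at equal lexical relevance.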
Layer 3: The Tool Execution Sandbox
Agents don't just chat—they act. The execution layer exposes a sandboxed environment for web search, file orchestration, and isolated shell commands, all driven by LLM intent mapping.
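One way to picture the intent-to-tool mapping is a dispatch table with an allowlisted shell runner, a common isolation pattern. The intent format, tool names, and allowlist here are assumptions for illustration, not Autonoma's actual sandbox.

```python
import shlex
import subprocess

# Sketch of shell isolation: only explicitly allowlisted binaries run.
ALLOWED_BINARIES = {"echo", "ls", "date"}

def run_shell(command: str) -> str:
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_BINARIES:
        raise PermissionError(f"binary not allowed: {argv[:1]}")
    return subprocess.run(
        argv, capture_output=True, text=True, timeout=5
    ).stdout

TOOLS = {
    "shell": run_shell,
    "web_search": lambda q: f"[stub results for {q!r}]",  # placeholder
}

def execute(intent: dict) -> str:
    # `intent` stands in for the LLM's structured tool call,
    # e.g. {"tool": "shell", "args": "echo hi"}
    return TOOLS[intent["tool"]](intent["args"])
```

The LLM never touches the OS directly; it only emits structured intents, and the executor decides what is actually allowed to run.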
Layer 4: Telemetry & HUD Triage
A premium React 19 + Vite frontend provides a live dashboard for real-time monitoring. Traces execution latency via Gantt charts and allows direct interventions via a high-fidelity Neural Registry.
Technical Implementation
# Multi-Agent Orchestration & Planning Loop
class CognitiveEngine:
    async def execute_mission(self, objective: str):
        # 1. Plan: Decompose high-level goal into atomic tasks
        mission_steps = await self.planner.synthesize(objective)
        for task in mission_steps:
            # 2. Act: Select and execute tool (Web, SQL, Shell)
            observation = await self.executor.run(task)
            # 3. Reflect: Update memory and refine future steps
            self.memory.append(task, observation)
            await self.refiner.integrate(objective, observation)
            # 4. Telemetry: Stream status to React HUD
            await self.telemetry.emit(task.status)

Technical Decisions
SQLite FTS5 + BM25
Opted against heavyweight vector databases. SQLite FTS5 with BM25 indexing keeps deployment ultra-lightweight while matching semantic-search relevance for operational queries.
Decoupled UI Layer
The execution loop (Python) streams events to the dashboard (React 19) over WebSockets. This keeps the UI from ever blocking the LLM generation loop and lets multiple dashboard clients attach without touching the agent process.
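The decoupling can be sketched as a queue between the hot loop and a background sender task: the loop only enqueues, and a separate task drains the queue toward the socket, so a slow UI consumer never stalls generation. The `send` callable below stands in for a real WebSocket connection (e.g. one from the `websockets` library); `TelemetryBus` is an illustrative name, not Autonoma's API.

```python
import asyncio
import json

class TelemetryBus:
    def __init__(self, send):
        self.queue: asyncio.Queue = asyncio.Queue()
        self.send = send  # async callable, e.g. a WebSocket's send

    def emit(self, event: dict) -> None:
        # Non-blocking from the execution loop's perspective.
        self.queue.put_nowait(event)

    async def pump(self) -> None:
        # Background task: drains the queue toward the dashboard.
        while True:
            event = await self.queue.get()
            await self.send(json.dumps(event))
            self.queue.task_done()

async def demo() -> list:
    sent = []
    async def fake_send(msg):  # stand-in for a WebSocket send
        sent.append(msg)
    bus = TelemetryBus(fake_send)
    pump = asyncio.create_task(bus.pump())
    bus.emit({"task": "plan", "status": "done"})
    await bus.queue.join()          # wait until the sender drained it
    pump.cancel()
    try:
        await pump
    except asyncio.CancelledError:
        pass
    return sent
```

If the dashboard disconnects, events simply accumulate in the queue (or can be dropped past a bound) while the agent keeps working.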
The Reality Check
"Agents degrade over time without strong state management. Most open-source toys fail because their context window fills with garbage. Building Autonoma proved that a robust memory architecture with algorithmic decay and Jaccard deduplication is 10x more important than the choice of foundation model."