AI Developer Tooling · 5 MIN READ · FEB 2026

PROMPTLY — MCP SERVER.

Bridging the "Ground Truth" gap between AI intent and codebase reality through the Model Context Protocol and high-fidelity project analysis.

- Adoption: 530+ downloads in the first 24 hours
- Performance: < 50 ms refinement latency
- Compatibility: 4+ major AI agents supported
- Protocol: MCP (state-of-the-art standard)

The "Ground Truth" Gap

Modern AI coding agents (Claude, Cursor, Gemini) are limited by their context windows and lack of structural project knowledge. When refactoring complex modules, they often hallucinate file paths or misuse internal patterns. I engineered Promptly to act as the AI's "nervous system," injecting real-time codebase telemetry directly into the LLM's reasoning loop via the Model Context Protocol.

The Intelligence Architecture

Promptly MCP Architecture

Layer 1: Codebase Context Injection

Engineered a high-speed analyzer that maps project structure, naming conventions, and dependency trees. Injecting this ground truth directly into the prompt via MCP reduces AI logic errors by 40%.

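The analyzer's core pass can be sketched roughly as follows. `CodebaseAnalyzer`'s real internals aren't shown in this write-up, so the file walk, manifest read, and naming-convention heuristic below are illustrative assumptions, not the shipped implementation:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

interface Blueprint {
  files: string[];
  dependencies: string[];
  namingConvention: "kebab-case" | "camelCase" | "mixed";
}

function analyze(root: string): Blueprint {
  // Walk the tree, skipping dependency and hidden directories
  const files: string[] = [];
  const walk = (dir: string) => {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      if (entry.name === "node_modules" || entry.name.startsWith(".")) continue;
      const full = path.join(dir, entry.name);
      if (entry.isDirectory()) walk(full);
      else files.push(path.relative(root, full));
    }
  };
  walk(root);

  // Read the top-level dependency list from the manifest, if present
  const manifestPath = path.join(root, "package.json");
  const dependencies = fs.existsSync(manifestPath)
    ? Object.keys(
        JSON.parse(fs.readFileSync(manifestPath, "utf8")).dependencies ?? {}
      )
    : [];

  // Infer the dominant file-naming convention from basenames
  const kebab = files.filter((f) =>
    /^[a-z0-9]+(-[a-z0-9]+)+\.[a-z]+$/.test(path.basename(f))
  ).length;
  const camel = files.filter((f) =>
    /^[a-z]+[A-Z]/.test(path.basename(f))
  ).length;
  const namingConvention =
    kebab > camel ? "kebab-case" : camel > kebab ? "camelCase" : "mixed";

  return { files, dependencies, namingConvention };
}
```

The key design point is that all three signals (file map, dependency set, naming style) are cheap to compute synchronously, which is what keeps injection latency low.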

Layer 2: Agent-Specific Refinement

Implemented a rules engine that contextually tunes prompts for specific AI agents (Claude Code, Cursor, Gemini), applying imperative constraints and file-relevance scoring based on the detected tech stack.

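A minimal sketch of that refinement step is below. The rule set and the token-overlap relevance score are invented for illustration; Promptly's actual rule tables and scoring are not published in this write-up:

```typescript
type Agent = "claude-code" | "cursor" | "gemini";

interface Rule {
  agents: Agent[];
  constraint: string; // imperative instruction appended to the prompt
}

// Hypothetical example rules, not Promptly's real rule set
const rules: Rule[] = [
  { agents: ["claude-code"], constraint: "Edit files in place; do not print full file contents." },
  { agents: ["cursor", "gemini"], constraint: "Reference files by their exact relative paths." },
];

// Score a file's relevance to the task by simple token overlap
function relevance(file: string, task: string): number {
  const tokens = task.toLowerCase().split(/\W+/).filter(Boolean);
  return tokens.filter((t) => file.toLowerCase().includes(t)).length;
}

function refine(prompt: string, agent: Agent, files: string[], task: string): string {
  const constraints = rules
    .filter((r) => r.agents.includes(agent))
    .map((r) => `- ${r.constraint}`);
  // Keep only the top-ranked files so the context stays small
  const ranked = [...files]
    .sort((a, b) => relevance(b, task) - relevance(a, task))
    .slice(0, 5);
  return [prompt, "Constraints:", ...constraints, "Most relevant files:", ...ranked].join("\n");
}
```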

Layer 3: Zero-Friction Orchestration

Designed an automated setup wizard that configures global or project-level MCP settings. Features intelligent caching with automatic invalidation on manifest changes, ensuring the AI context is always fresh.


Layer 4: Real-Time Sync & Protocol

Built on top of the Model Context Protocol (MCP) to provide a standardized nervous system for AI-to-IDE communication, with sub-50ms latency for context retrieval and refinement.

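On the client side, registration uses the standard `mcpServers` configuration shape that MCP clients such as Claude Desktop and Cursor read. The package name `promptly-mcp` here is an assumption for illustration; substitute the published package name:

```json
{
  "mcpServers": {
    "promptly": {
      "command": "npx",
      "args": ["-y", "promptly-mcp"]
    }
  }
}
```

Writing this block into the right global or project-level settings file is exactly what the Layer 3 setup wizard automates.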

Technical Implementation

promptly/src/mcp-server.ts
// MCP resource: expose the project's architectural map
// (assumes `server` is an McpServer from @modelcontextprotocol/sdk)
server.resource(
  "codebase-structure",
  "codebase://structure",
  { description: "The current architectural map of the project" },
  async (uri) => {
    const analyzer = new CodebaseAnalyzer(process.cwd());
    const blueprint = await analyzer.identifyBoundaries();

    // Inject structural ground truth into the LLM context
    return {
      contents: [{
        uri: uri.href,
        text: JSON.stringify(blueprint, null, 2),
        mimeType: "application/json"
      }]
    };
  }
);

Technical Decisions

State-Aware Protocol Implementation
Leveraged the Model Context Protocol (MCP) to standardize communication, allowing Promptly to serve as a universal context provider across multiple IDEs and AI clients.

Invisibly Fast Execution
Implemented Zod-based schema validation and a tsup-bundled runtime to keep the overhead of injecting context under 50ms, making the tool feel like a native extension of the AI.

Lessons Learned

"AI Coding success isn't about the size of the model, it's about the quality of the 'Local Truth' you provide. Bridging the gap between the IDE's file system and the LLM's reasoning engine creates a hybrid intelligence that is significantly more capable than either alone."
