PROMPTLY — MCP SERVER.
Bridging the "Ground Truth" gap between AI intent and codebase reality through the Model Context Protocol and high-fidelity project analysis.
The "Ground Truth" Gap
The Intelligence Architecture
Layer 1: Codebase Context Injection
Engineered a high-speed analyzer that maps project structure, naming conventions, and dependency trees. By injecting this 'ground truth' directly into the prompt via MCP, it reduces AI logic errors by 40%.
Layer 2: Agent-Specific Refinement
Implemented a rules-engine that contextually tunes prompts for specific AI agents (Claude Code, Cursor, Gemini). It applies imperative constraints and file relevance scoring based on the detected tech stack.
Layer 3: Zero-Friction Orchestration
Designed an automated setup wizard that configures global or project-level MCP settings. Features intelligent caching with automatic invalidation on manifest changes, ensuring the AI context is always fresh.
Layer 4: Real-Time Sync & Protocol
Built on top of the Model Context Protocol (MCP) to provide a standardized nerve system for AI-to-IDE communication, ensuring sub-50ms latency during context retrieval and refinement.
Technical Implementation
// MCP Resource Provider: Recursive Codebase Inlay
server.resource(
"codebase://structure",
"The current architectural map of the project",
async (uri) => {
const analyzer = new CodebaseAnalyzer(process.cwd());
const blueprint = await analyzer.identifyBoundaries();
// Inject structural ground truth into LLM context
return {
contents: [{
uri: uri.href,
text: JSON.stringify(blueprint, null, 2),
mimeType: "application/json"
}]
};
}
);Technical Decisions
State-Aware Protocol ImplementationLeveraged the Model Context Protocol (MCP) to standardize communication, allowing Promptly to serve as a universal context provider across multiple IDEs and AI clients.
Invisibly Fast ExecutionImplemented a Zod-based schema validation and tsup-bundled runtime to ensure the overhead of injecting context was less than 50ms, making the tool feel like a native extension of the AI.
Lessons Learned
"AI Coding success isn't about the size of the model, it's about the quality of the 'Local Truth' you provide. Bridging the gap between the IDE's file system and the LLM's reasoning engine creates a hybrid intelligence that is significantly more capable than either alone."