Meta-Prompting Protocol: an architecture for AI communication

MPP is a framework for generating self-describing, task-specific AI communication protocols on the fly, moving beyond simple prompts to create robust, reliable, and model-agnostic instructions.

Core Philosophy

From brittle prompts to resilient protocols.

Dynamic & Adaptable

Generate a new, bespoke protocol for each unique task, ensuring a perfect fit for the problem's complexity.

Self-Describing

Every message includes its own "rulebook," allowing any AI to understand and execute the task without prior knowledge.

Model Agnostic

Because the rules are transmitted with the data, any compliant AI model can act as an Executor, preventing vendor lock-in.

The MPP Workflow

A sequential process separates the role of designing the communication from the role of executing the task, for maximum reliability.

📜

1. The Protocol Architect

An AI agent receives a high-level goal. Guided by the MPP framework, it generates a new, task-specific Derivative Protocol and encodes the user's request into a payload.

📦

2. The Self-Contained Bundle

The Architect assembles a bundle containing both the full protocol specification (the rules) and the encoded payload (the data), then transmits it.
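As a concrete sketch, a bundle might be serialized like this. All field names here are illustrative assumptions for this example, not part of any fixed MPP schema:

```python
import json

# Hypothetical bundle layout: field names are illustrative only.
bundle = {
    "mpp_version": "1.0",
    "specification": {
        # The "rulebook": how the Executor must interpret the payload.
        "task_type": "summarization",
        "output_format": "three bullet points, plain text",
        "constraints": ["no speculation", "stay under 100 words"],
    },
    "payload": {
        # The encoded user request.
        "user_goal": "Summarize the quarterly report.",
        "source_text": "...",
    },
}

# Rules and data travel together as a single self-describing message.
message = json.dumps(bundle)
```

Because the specification rides alongside the payload, the receiving model needs no prior knowledge of the task.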

🧠

3. The Executor

A second AI receives the bundle. It first reads the specification to learn the rules, then processes the payload with full context and clarity, returning a highly accurate result.
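The Executor's rules-first ordering can be sketched as follows; the field names (`specification`, `payload`) are assumptions made for illustration, and a real Executor would send the assembled instruction to a language model rather than return it:

```python
def execute_bundle(bundle: dict) -> str:
    """Sketch of an Executor: read the rules first, then process the payload.

    Field names are illustrative, not a fixed MPP schema.
    """
    spec = bundle["specification"]   # step 1: learn the rules
    payload = bundle["payload"]      # step 2: apply them to the data
    # Assemble the fully contextualized instruction for a compliant model.
    return (
        f"Task: {spec['task_type']}\n"
        f"Format: {spec['output_format']}\n"
        f"Goal: {payload['user_goal']}"
    )
```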

🧐

4. The Quality Assurance Agent (Optional)

A third AI agent validates the Executor's output against the user's original intent, using the structured data in the bundle to ensure fidelity and correctness. If the protocol defines MCP tooling, the Executor may run those tool calls before producing its final response.
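A minimal sketch of what QA validation against the bundle might look like. The checks and field names (`must_mention`, `max_words`) are hypothetical examples of structured constraints, not part of the MPP specification:

```python
def qa_check(bundle: dict, executor_output: str) -> list:
    """Sketch of a QA agent: validate output against the bundle's stated intent.

    Returns a list of failure messages; an empty list means the output passes.
    """
    failures = []
    spec = bundle["specification"]
    # Verify each required term actually appears in the output.
    for required in spec.get("must_mention", []):
        if required.lower() not in executor_output.lower():
            failures.append(f"missing required mention: {required}")
    # Enforce a declared length budget, if the bundle specifies one.
    if spec.get("max_words") and len(executor_output.split()) > spec["max_words"]:
        failures.append("output exceeds max_words")
    return failures
```

In practice the QA agent would be a model judging semantic fidelity, with mechanical checks like these as a fast first gate.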

DSPy Adapter Interface

MPP can be consumed as a DSPy module (MPPAutoAdapter) and optimized via a DSPy teleprompter (MPPAutoAdapterOptimizer) to tune the longitudinal template before running the vertical loop.

[Diagram: MPP refinement loops]

Longitudinal refinement

The longitudinal loop wraps the vertical pipeline: it requires all QA checks to pass and mutates template blocks to improve the success rate of self-contained bundles. Use patience and a minimum delta to stop early once scores stop improving.
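The patience / minimum-delta stopping rule can be sketched as a generic early-stopping loop. Function and parameter names here are illustrative, not the actual `MPPAutoAdapterOptimizer` internals:

```python
def run_longitudinal(score_fn, mutate_fn, template,
                     patience=2, min_delta=0.01, max_rounds=20):
    """Sketch of the longitudinal stopping rule: mutate the template each
    round and stop once the score fails to improve by at least min_delta
    for `patience` consecutive rounds."""
    best_score = score_fn(template)
    best_template = template
    stale = 0
    for _ in range(max_rounds):
        candidate = mutate_fn(best_template)
        score = score_fn(candidate)
        if score > best_score + min_delta:
            # Meaningful improvement: keep the mutation, reset patience.
            best_score, best_template, stale = score, candidate, 0
        else:
            stale += 1
            if stale >= patience:
                break  # scores have plateaued
    return best_template, best_score
```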

Vertical refinement

The vertical loop handles a single request, iterating bundle and executor refinement with QA gating (stability checks apply in closed-world runs).
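The shape of that per-request loop might look like this. The three callables are stand-ins for the Architect, Executor, and QA stages, and their names are assumptions for illustration:

```python
def run_vertical(build_bundle, execute, qa_gate, user_goal, max_iters=3):
    """Sketch of the vertical loop: refine the bundle and executor output
    until QA passes or the iteration budget runs out."""
    feedback = None
    for _ in range(max_iters):
        bundle = build_bundle(user_goal, feedback)  # (re)build using QA feedback
        output = execute(bundle)
        passed, feedback = qa_gate(bundle, output)  # QA gating
        if passed:
            return output
    return output  # best effort after max_iters
```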

Example: vertical only

import dspy
from mpp_dspy import MPPAutoAdapter

lm = dspy.OpenAI(model="gpt-4o-mini")
dspy.settings.configure(lm=lm)

program = MPPAutoAdapter()
result = program(user_goal="Draft a crisp product launch email.", open_world=True)
print(result.decoded_bundle)

Example: longitudinal + vertical

import dspy
from mpp_dspy import DefaultLongitudinalMutator, MPPAutoAdapter, MPPAutoAdapterOptimizer

lm = dspy.OpenAI(model="gpt-4o-mini")
dspy.settings.configure(lm=lm)

template = """[ENTRY_PROMPT]
{{MPP_MUTABLE:entry_prompt}}...{{/MPP_MUTABLE}}
"""
case = {"user_goal": "Produce a risk checklist.", "open_world": True}

program = MPPAutoAdapter()
optimizer = MPPAutoAdapterOptimizer(
    template=template,
    mutate_function=DefaultLongitudinalMutator(lm),
    longitudinal_patience=1,
    longitudinal_min_delta=0.0,
)
optimized = optimizer.compile(program, trainset=[case])
result = optimized(user_goal="Draft a crisp product launch email.", open_world=True)
print(result.decoded_bundle)

Learn More

Dive into the full specification and see concrete examples of MPP-compliant derivative protocols and their usage.