Stop hardcoding AI prompts in your scenarios — here's a better way

Hey Make community!

Quick question: how do you manage AI prompts across your scenarios?

I had 15+ scenarios using ChatGPT/Claude, and every time I wanted to improve a prompt, I had to:

  • Open each scenario one by one
  • Find the AI module
  • Update the prompt
  • Test everything again

It was driving me crazy. So I built a solution: xR2 — a prompt management app with a native Make module.

How it works:

  1. Store all your prompts in xR2 with variables like {customer_name}, {product}
  2. In Make, use the xR2 module → “Get Prompt” action
  3. Render variables before passing to ChatGPT/Claude — two easy ways:
    • replace() function (recommended for 1-3 variables) — chain replace() calls right in the OpenAI module content field
  • Text Parser modules (visual, no formulas) — add a Text Parser: Replace module for each variable, placed between the xR2 and OpenAI modules
  4. Update prompts anytime in xR2 — all scenarios pick up changes automatically
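Under the hood, step 3 is just string substitution: each {placeholder} gets swapped for its value, exactly like chaining replace() calls. A minimal Python sketch of that rendering logic (the prompt text and variable names are made up for illustration, not anything xR2-specific):

```python
def render_prompt(template: str, variables: dict) -> str:
    """Substitute {name}-style placeholders, mirroring chained replace() calls."""
    rendered = template
    for name, value in variables.items():
        rendered = rendered.replace("{" + name + "}", str(value))
    return rendered

# Hypothetical example values
prompt = "Write a thank-you note to {customer_name} about {product}."
print(render_prompt(prompt, {"customer_name": "Ada", "product": "xR2"}))
# → Write a thank-you note to Ada about xR2.
```

The same trade-off applies here as in Make: chained replaces are quick for a handful of variables, while a dedicated module per variable is easier to read and debug once the list grows.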

Why this is useful:

  • No more copy-paste — one prompt, many scenarios
  • Version control — test new versions without breaking production
  • A/B testing — which prompt converts better? Now you can measure it
  • Track results — did users complete the action after seeing your AI response?

Example scenario:
Webhook → Set Variables → xR2 (Get Prompt) → Text Parser (replace vars) → ChatGPT → xR2 (Track Event)

The xR2 module supports:

  • Get Prompt (fetch prompt by slug, filter by version/status)
  • Track Event (for conversion analytics)
  • Check API Key (validate credentials)

Full setup guide with screenshots: Make.com - xR2 Documentation

Would love to hear if anyone else has struggled with prompt management. Open to feedback!