Hi Make team and community,
I’d like to flag an important limitation I’m encountering with Make’s OpenAI / ChatGPT integration, and to suggest a solution that could bring more value to both users and Make itself.
Context / Problem
- Many of us are now working with advanced models (GPT-5, GPT-5 Pro, etc.) using long prompts and high reasoning effort.
- Make’s current timeout limits for OpenAI modules are often too short for these complex scenarios.
- As a result, high-effort executions are frequently interrupted, not because of faulty requests, but simply because the model needs more time to deliver high-quality reasoning or creative output.
- This leads to two main issues:
  - Quality loss: we’re forced to simplify prompts, break them into smaller chunks, or downgrade reasoning to “Medium.”
  - Wasted cost: we still pay for high-priority API calls but don’t get the expected value due to early timeouts.
- Meanwhile, when calling OpenAI’s API directly, we can configure higher reasoning effort and longer response times, allowing more complete and accurate outputs (see the sketch just below this list). As we head into late 2025, this is becoming increasingly critical for modern AI workflows.
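For context, here is roughly the kind of direct call we fall back to outside Make. This is a minimal sketch assuming the official openai Python SDK; the model name and the 10-minute client timeout are illustrative placeholders, not a recommendation for Make’s defaults.

```python
# Minimal sketch of a direct OpenAI call with high reasoning effort and a
# long client-side timeout. Assumes OPENAI_API_KEY is set in the environment;
# "gpt-5" is a placeholder model name taken from this post.
from openai import OpenAI

client = OpenAI(timeout=600.0)  # allow up to 10 minutes per request

response = client.chat.completions.create(
    model="gpt-5",
    reasoning_effort="high",  # deeper reasoning instead of downgrading to "medium"
    messages=[
        {"role": "user", "content": "Long, multi-step prompt goes here..."},
    ],
)

print(response.choices[0].message.content)
```

Routed through the current Make module, the same request is cut off well before that budget is reached.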
Request / Proposal
- Please consider increasing the timeout duration for OpenAI / ChatGPT modules in Make, or making it configurable.
- Ideally, users could select their preferred reasoning/time budget per module (a rough mapping of what this could mean in practice is sketched after this list), e.g.:
  - Medium = ~2 min
  - High = ~5 min
  - XL = ~10 min
- Alternatively, grant longer timeouts automatically for high-priority or AI-intensive scenarios, so these can run without premature termination.
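To make the proposal concrete, here is a purely illustrative sketch of how per-module budgets could translate into per-request timeouts on the API side. The tier names and durations come from the list above; `run_with_budget` is a hypothetical helper, not an existing Make or OpenAI feature.

```python
# Illustrative only: mapping the proposed budget tiers to request timeouts.
# Uses the openai SDK's per-request timeout override via with_options().
from openai import OpenAI

BUDGETS_SECONDS = {"medium": 120.0, "high": 300.0, "xl": 600.0}

client = OpenAI()

def run_with_budget(tier: str, prompt: str, model: str = "gpt-5"):
    """Run one completion under the selected time budget (hypothetical helper)."""
    # "xl" reuses the API's "high" reasoning effort but with a larger time window.
    effort = tier if tier in ("medium", "high") else "high"
    return client.with_options(timeout=BUDGETS_SECONDS[tier]).chat.completions.create(
        model=model,
        reasoning_effort=effort,
        messages=[{"role": "user", "content": prompt}],
    )

# Example: an "XL" run gets a 10-minute window instead of today's fixed module limit.
result = run_with_budget("xl", "Long, multi-step prompt goes here...")
print(result.choices[0].message.content)
```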
Why It Matters
- As model capabilities increase, so does the complexity of prompts, logic chains, and generated content.
- Limiting processing time restricts what’s achievable within Make and forces users to step outside the platform for serious AI workloads.
- From a business standpoint, this constraint diminishes Make’s value proposition for advanced users who are otherwise willing to pay for higher-tier features.
Bonus Idea: Monetize This Enhancement
Here’s the win-win part:
You could tie extended timeout limits to plan tiers instead of making them universal.
For example:
- Pro plans: 1 scenario with an extended timeout
- Team plans: up to 2 scenarios with extended timeouts
- Enterprise plans: unlimited or on-demand timeout increases
This creates a clear upgrade incentive — advanced users (like agencies, data teams, or AI developers) would gladly move to higher plans to unlock more powerful AI execution capabilities.
Everyone wins:
- Users get higher performance and reliability.
- Make increases conversions and ARPU (average revenue per user).
- The platform positions itself as the go-to automation hub for high-value AI workflows.
Closing Thoughts
I understand that timeout policies exist for technical and fairness reasons. But given the pace of AI evolution and the growing need for complex reasoning, I strongly believe this improvement would benefit both Make and its professional users.
Thanks for reading and for considering this suggestion — happy to share concrete examples of scenarios where extended timeouts would make a real difference.
Best regards,
Thomas,
Webedia