What is the AI Agent pricing model?

:bullseye: What is your goal?

I want to be sure what the pricing is for the AI Agent feature (and whether it will stay the same in the future).

This is important to me because I need to recommend an AI Agent platform to my clients, and cost is a major factor for them.

:thinking: What is the problem & what have you tried?

I’ve checked https://www.make.com/en/ai-agents and the pricing page, but I haven’t seen specific info about how AI agents are charged now and whether that will change once they are out of beta.

Welcome to the Make community!

That is correct. This is because the use of Make AI modules consumes credits based on the selected model and the number of tokens used (input + output). Some modules use a time-based pricing model, like audio transcription and running code.
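As a rough illustration of how token-based credit pricing works (the per-model rates and model names below are made-up placeholders, not Make’s actual figures — see the Credits article for those), the calculation is essentially tokens times a per-model rate:

```python
# Hypothetical credit estimate for a token-priced AI module.
# The rates and model names are placeholders, NOT Make's actual pricing.
CREDITS_PER_1K_TOKENS = {
    "small-model": 0.5,  # hypothetical rate
    "large-model": 2.0,  # hypothetical rate
}

def estimate_credits(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate credits as (input + output tokens) x the per-model rate."""
    total_tokens = input_tokens + output_tokens
    return total_tokens / 1000 * CREDITS_PER_1K_TOKENS[model]

# 1500 input + 500 output tokens on the larger model:
print(estimate_credits("large-model", 1500, 500))  # -> 4.0
```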

For more information on the calculation for each module’s usage, see

Further details regarding dynamic credit usage can be found in the Credits article.

If you need a way to count tokens within Make, see Is there a way to count tokens?

Hope this helps! If you are still having trouble, please provide more details.

@samliew

Just to add to this specifically for Make AI Agents …

As @samliew mentioned, your costs will vary based on the selected model and number of tokens used.

If you use Make’s AI Provider, then the token usage will be billed by Make through the credits mechanism.

If you use another AI Provider (e.g. OpenAI or Gemini), then your token usage will be billed directly by that provider.

Regardless, token usage will be the biggest variable. There’s a whole bunch of factors that will affect token usage:

  • Adding files as Context. The bigger the files, the more tokens used to analyze them.
  • Prompt size and complexity. The bigger and more complex the prompt … you guessed it.
  • Tools - the more tools and more parameters they have, the more tokens.

For any given agent (using any agent builder - not just Make AI Agents) it’s possible to build it for efficient token usage or hugely inefficient usage.

As a general guide, here are some dos and don’ts for efficient token usage:

  • Don’t just dump everything your agent might need in context files. It might seem easier to “just let the LLM work it out” but that’s what costs in tokens.
  • Don’t just add a bunch of MCP servers with many, many tools and “let the LLM work it out” - for the same reason.
  • Using large context or high numbers of tools will deliver an Agent that is more prone to hallucination and almost impossible to test effectively. You’ll end up trying to fix that in the prompt, adding more complexity … and using yet more tokens.
  • DO use individual Tools to provide exactly the context your Agent needs at exactly the right time. This minimizes token usage dramatically, and usually makes for a much simpler prompt - also minimizing token usage.

Thanks, but I was referring more to the Make credit system (in relation to AI Agents), not the AI tokens.

How do agents consume Make credits (if I use another AI Provider, not Make AI)?

Is it 1 credit per specific action performed by the agent (like 1 credit for each connection to a tool, each web search, or each time the agent “thinks”), 1 credit for the whole agent interaction, or something else?

It probably makes a big difference, since I guess each agent interaction usually involves dozens of mini-actions, probably far more than a normal workflow. So if every mini-action counts as a credit, credit usage could skyrocket even if I’m using another AI Provider.

That’s simple - it’s one credit per “Run an agent” operation, as detailed in the help.

All the other actions you identified are performed by the LLM, not the agent.

With one proviso … if your agent invokes Make scenarios as tools, those scenarios will also consume credits.
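Putting that together as a sketch (not official pricing): the Make credit cost of one agent run is a single credit for the “Run an agent” operation, plus whatever credits any Make scenarios invoked as tools happen to consume:

```python
def run_credits(scenario_tool_credits: list[int]) -> int:
    """One credit for the 'Run an agent' operation, plus the credits
    consumed by any Make scenarios the agent invokes as tools.
    Tool credit figures here are hypothetical examples."""
    return 1 + sum(scenario_tool_credits)

# Agent run that invokes three scenario tools costing 2, 1 and 4 credits:
print(run_credits([2, 1, 4]))  # -> 8

# Agent run that uses no scenario tools (e.g. only direct LLM calls):
print(run_credits([]))  # -> 1
```

Note the LLM-side token cost is separate and, with a non-Make provider, is billed by that provider rather than in credits.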


OK, sounds good. Just to confirm with an example: if the agent uses 5 different tools (5 different modules/connections configured in Make, such as Google Sheets, Notion or whatever) in the same “Run an agent” operation, is it just 1 credit?

The agent itself will use only one credit. But the tools themselves will also use credits.

As the tool usage is non-deterministic (as directed by the LLM), credits from Make tools can’t be predicted with any accuracy and will vary widely based on input variables.

Best practice in predicting costs (for any agent builder, not just Make AI Agents) is to create a set of sample data (ideally at least 50 runs) and analyze the variation in Make credit and LLM token usage. That should give you a good idea of your likely upper and lower extremes of usage.
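The sampling approach above can be sketched as follows. The per-run figures here are invented sample data (a real analysis would use at least ~50 runs pulled from your execution history), but the idea is simply to collect (credits, tokens) per run and look at the spread:

```python
import statistics

# Hypothetical measurements: (make_credits, llm_tokens) per sample run.
# Real data would come from your own execution/usage logs.
sample_runs = [(3, 1800), (1, 950), (6, 4100), (2, 1400), (4, 2600)]

credits = [c for c, _ in sample_runs]
tokens = [t for _, t in sample_runs]

# Lower/upper extremes and the average give a rough cost envelope.
print("credits: min", min(credits), "max", max(credits),
      "mean", statistics.mean(credits))
print("tokens:  min", min(tokens), "max", max(tokens),
      "mean", statistics.mean(tokens))
```

With enough runs you could also look at percentiles (e.g. `statistics.quantiles`) rather than raw min/max, since a single outlier run can distort the extremes.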