What is the AI Agent pricing model?

Just to add to this specifically for Make AI Agents …

As @samliew mentioned, your costs will vary based on the selected model and number of tokens used.

If you use Make’s AI Provider, then the token usage will be billed by Make through the credits mechanism.

If you use another AI Provider (e.g., OpenAI or Gemini), then your token usage will be billed directly by that provider.

Regardless, token usage will be the biggest variable. There's a whole bunch of factors that will affect token usage:

  • Adding files as Context. The bigger the files, the more tokens used to analyze them.
  • Prompt size and complexity. The bigger and more complex the prompt … you guessed it.
  • Tools - the more tools and more parameters they have, the more tokens.
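To make these factors concrete, here's a minimal back-of-the-envelope estimator. It uses the common "roughly 4 characters per token" rule of thumb for English text and an invented per-1K-token price — both are illustrative assumptions, not Make's or any provider's actual tokenizer or rates:

```python
# Rough per-call cost estimator for an agent request.
# ASSUMPTIONS: ~4 chars/token heuristic and a made-up price of
# $0.002 per 1K tokens - neither reflects a real provider's billing.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English."""
    return max(1, len(text) // 4)

def estimate_call_cost(prompt: str,
                       context_files: list[str],
                       tool_schemas: list[str],
                       price_per_1k: float = 0.002) -> float:
    """Sum estimated input tokens from the prompt, attached context
    files, and tool definitions, then convert to a dollar figure."""
    total = estimate_tokens(prompt)
    total += sum(estimate_tokens(f) for f in context_files)  # bigger files = more tokens
    total += sum(estimate_tokens(s) for s in tool_schemas)   # more tools/params = more tokens
    return total / 1000 * price_per_1k
```

Running it with one large context file versus none makes the first bullet point obvious: the file's size flows straight into the token count on every single call.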

For any given agent (using any agent builder - not just Make AI Agents) it’s possible to build it for efficient token usage or hugely inefficient usage.

As a general guide, here are some dos and don'ts for efficient token usage:

  • Don’t just dump everything your agent might need in context files. It might seem easier to “just let the LLM work it out” but that’s what costs in tokens.
  • Don’t just add a bunch of MCP servers with many, many tools and “let the LLM work it out” - for the same reason.
  • Using large context or high numbers of tools will deliver an Agent that is more prone to hallucination and almost impossible to test effectively. You’ll end up trying to fix that in the prompt, adding more complexity … and using yet more tokens.
  • DO use individual Tools to provide exactly the context your Agent needs at exactly the right time. This minimizes token usage dramatically, and usually makes for a much simpler prompt - also minimizing token usage.
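The last point is easiest to see side by side. Below is a hypothetical sketch (the order data and `get_order_status` tool are invented for illustration; this is not a Make or provider API). The "dump" approach sends every record into context on every call; the tool approach sends only a short tool schema, and the record is fetched only when the agent actually asks for it:

```python
# Hypothetical contrast: context dump vs. a narrow tool.
# ORDERS and get_order_status are invented examples, not a real API.

ORDERS = {"A-1001": "shipped", "A-1002": "processing", "A-1003": "delayed"}

# Inefficient: every order goes into the prompt, and the LLM
# "works it out" - paying tokens for all records on every call.
def build_context_dump() -> str:
    return "\n".join(f"Order {oid}: {status}" for oid, status in ORDERS.items())

# Efficient: expose one narrow tool. Only the tool's short schema
# sits in context; a single record is retrieved exactly when needed.
def get_order_status(order_id: str) -> str:
    """Tool: return the status of a single order by ID."""
    return ORDERS.get(order_id, "unknown")
```

With a handful of orders the difference is trivial, but with thousands of records the dump grows linearly while the tool's token footprint stays constant - which is exactly why the tool-based agent is also easier to test and less prone to hallucination.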