Confused About AI Agent Access to Apps Like ClickUp and Slack

Hey everyone, I’m a bit confused and hoping someone can clarify.

If I want to create a Make.com AI Agent and give it access to tools like ClickUp or Slack, do I really need to create a separate scenario for each individual feature or task the app supports?

For example, ClickUp has over 50 functions (e.g. list workspaces, get a task, post a comment, list folders, etc.). Does that mean I need to build a new scenario for every single one of those just to make them accessible to the AI agent?

That seems incredibly time-consuming—easily 100+ hours just to build a decent agent. I really hope I’m misunderstanding how this works.

Would love any clarification or tips. Thanks!

Hi Mike. Welcome to the Community!

Great to hear that you’re experimenting with Make AI Agents.

The best way to approach Tools is to think of the specific tasks that you want your Agent to perform. You don't need to implement Tools for every single app module - unless you expect your Agent to need every single one of them.

We’ll be adding new features to Make AI Agents over the coming weeks and months, one of which might well help in this area (I can’t tell you more or our Engineering team will hunt me down!).

One extra consideration though … you don’t necessarily want your Tools to just be a single module. Here’s why:

  • Agents perform best when they're only dealing with data that's essential to their task. Give them too many options or too much data and they're more likely to take the wrong path (or worse still, hallucinate), making your prompt design more and more complicated. Instead, create a Tool that only takes the input parameters needed for that task, and only returns the output that's required for the agent to act (see the sketch after this list).
  • If your Agent must do some things sequentially, then put those things in a Tool scenario. Traditional deterministic automation is great at sequential tasks - you know exactly what will happen in what order, and it’s usually faster (and cheaper) than having the Agent work it out. And again, that makes your prompt simpler.
  • Error handling is a great example of something you usually want done deterministically. Errors are a fact of life for Internet-based REST APIs. It's usually best to handle these in the Tool scenario rather than leaving it up to the Agent (and again, that makes prompting much simpler).
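To make the first and third points concrete, here's a rough sketch of what a narrow Tool boils down to. It's written as plain Python against ClickUp's public v2 API rather than as a Make scenario, and the function name is illustrative - the point is the shape: two inputs in, one field out, errors handled before the Agent ever sees them.

```python
import os
import requests  # pip install requests

CLICKUP_TOKEN = os.environ["CLICKUP_TOKEN"]  # personal API token

def post_task_comment(task_id: str, text: str) -> dict:
    """Narrow Tool: the Agent supplies two inputs and gets one field back.

    It never sees auth headers, optional flags, or the dozens of other
    fields that ClickUp's API accepts and returns.
    """
    resp = requests.post(
        f"https://api.clickup.com/api/v2/task/{task_id}/comment",
        headers={"Authorization": CLICKUP_TOKEN},
        json={"comment_text": text},
        timeout=10,
    )
    resp.raise_for_status()  # deterministic error handling, not the Agent's job
    return {"comment_id": resp.json()["id"]}
```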

Make AI Agents are designed to help you effectively blend Agentic automation and traditional deterministic automation. The best result usually comes from selecting the best of both for your specific use-case.


Hi David,

Thanks for your quick and detailed message. I appreciate the guidance, but I’m concerned about efficiency.

I need to use over 20 features from just the ClickUp module alone, plus features from 10 more modules, each with multiple parameters. With the current approach, this setup becomes extremely time-consuming and potentially not worth the effort.

For comparison, with Claude Desktop and Cursor, I can install MCPs for GitHub, Slack, and ClickUp that give me access to over 70 features instantly. The only limitation is needing to trigger chats manually since there’s no scheduled automation.
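For anyone who hasn't seen it, this is roughly what declaring a tool looks like with the official MCP Python SDK (`pip install mcp`). The decorator publishes the function signature as the tool's schema, which is why one server can expose dozens of tools with no per-tool wiring - the server name and tool body below are just a stub:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("clickup-demo")

@mcp.tool()
def get_task(task_id: str) -> str:
    """Fetch a ClickUp task by id."""
    # a real server would call the ClickUp API here; stubbed for this sketch
    return f"task {task_id}: (fields would go here)"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for clients like Claude Desktop
```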

I’m wondering why Make didn’t implement automatic mapping of module parameters to the AI agent. This approach would give users immediate access to all connections and modules in their account without extensive setup. The agent could automatically receive all inputs and outputs, with advanced settings available for users who want more control.

This would dramatically reduce implementation time while preserving all the benefits of your hybrid approach.
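Here's a toy sketch of what I mean. Make doesn't expose module metadata in any form like this, so the input shape is invented, but turning it into an agent tool schema is purely mechanical:

```python
def module_to_tool_schema(module: dict) -> dict:
    """Derive an agent tool schema from (hypothetical) module parameter metadata."""
    return {
        "name": module["name"],
        "description": module.get("description", ""),
        "parameters": {
            "type": "object",
            "properties": {
                p["name"]: {"type": p["type"], "description": p.get("help", "")}
                for p in module["parameters"]
            },
            "required": [p["name"] for p in module["parameters"] if p.get("required")],
        },
    }

# module_to_tool_schema({"name": "clickup.getTask", "parameters":
#     [{"name": "task_id", "type": "string", "required": True}]})
```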

I understand your point that agents perform best with only essential data, but modern AI models are becoming increasingly capable of handling complex decision-making. Rather than requiring separate tools for each function, the agent could first determine which inputs and outputs it needs before sending a request.

For example, when working with ClickUp, the agent could assess the task at hand, identify the necessary parameters for a specific function, and then make a targeted API call - similar to how Claude and GPT work with tools today. This approach shifts the filtering responsibility to the agent rather than requiring the developer to create dozens of narrow scenarios.
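That filtering step is exactly what the standard function-calling flow already does. A minimal sketch with the OpenAI Python SDK (the model name and tool name are just examples): the model sees only the schemas, picks one function and emits only the arguments it needs, and the runtime then makes the single targeted call.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "clickup_get_task",  # illustrative tool name
        "description": "Fetch one ClickUp task by id",
        "parameters": {
            "type": "object",
            "properties": {"task_id": {"type": "string"}},
            "required": ["task_id"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model
    messages=[{"role": "user", "content": "What's the status of task 9hx?"}],
    tools=tools,
)

# assuming the model chose to call a tool:
call = resp.choices[0].message.tool_calls[0]
# call.function.name      -> "clickup_get_task"
# call.function.arguments -> '{"task_id": "9hx"}' (JSON to execute against the API)
```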

Alternatively, Make could implement a feature that automatically generates the input/output mappings for each module function. This would preserve the benefits of your focused tool approach while dramatically reducing setup time. You could offer templates where users select which parameters to expose, with smart defaults based on common use cases.

I appreciate your points about sequential tasks and error handling - those are valid concerns. But offering some level of automated mapping would provide a better balance between development effort and agent capability, especially for users like me who need to work with dozens of functions across multiple modules.


Interesting perspectives - thanks for sharing!

I agree that modern models are getting better at this … but in general leaving everything up to the Agent means slower execution, more cost, more uncertainty over the chosen path, more complex prompts and a lot more debugging time.

It’s a question of where the developer wants to spend time - creating scenarios or debugging prompts.

I’d also argue that simpler prompts mean more maintainable agents - important if you expect others to take over the maintenance responsibility in future.

It might be a trade-off worth making when using interactive agents with Claude and GPT. But when you have an unattended agentic automation that’s running thousands of times a day, the trade-off could well be different.

NB - My own personal take, not necessarily representative of Make’s official line.

PS - notwithstanding my take, there are plans underway that will go some way to addressing your points!

Those are excellent points, David. I appreciate your practical perspective, especially on the real-world trade-offs between agent autonomy and structured workflows.

I’m looking forward to seeing what your engineering team develops in the coming months. The maintainability point particularly resonates - simpler prompts mean easier handoffs and fewer “prompt engineering wizards” needed on teams.

On the micro models front, I see potential for Make to leverage smaller, specialized models in really practical ways:

  • Parameter mapping optimization: Local 3B models could efficiently handle structured input classification and routing without the token costs of larger models
  • Decision tree navigation: Small models could determine which branch of a scenario to execute based on context (see the sketch below)
  • Input validation and normalization: Preprocessing data before it reaches expensive API calls
  • Error detection and recovery: Identifying when outputs don’t match expected patterns and triggering remediation

This hybrid approach could give you the best of both worlds - specialized models for deterministic tasks and larger models only where their capabilities justify the cost.
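To make the routing idea concrete, here's a sketch that uses Hugging Face's zero-shot classification pipeline as a stand-in for whatever micro model Make might embed - small enough to run locally, and no per-branch training required:

```python
from transformers import pipeline  # pip install transformers

# small local model as a router; the model choice is just an example
router = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

BRANCHES = ["create task", "post comment", "report status"]

def pick_branch(request: str) -> str:
    """Classify a request into a scenario branch before anything expensive runs."""
    result = router(request, candidate_labels=BRANCHES)
    return result["labels"][0]  # labels come back sorted by score, best first

# pick_branch("let the team know the deploy finished") -> "post comment"
```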

Anyways, I will patiently wait. Thanks again for all the hard work you guys have been doing over the past couple of years.

I am building complex AI Agents right now. If you guys need some more YouTube videos to showcase use cases, or if you need me to beta test new features for the agents, please reach out to me. I would love to offer my help for free.


We’re really excited to see what our community produces with Make AI Agents.

You can do that in a number of ways.

We’re aiming to get new features out quickly - there’s no closed beta program any longer. We’re pretty much developing in public.
