How to use Ollama on my computer to drive make.com database queries?

:bullseye: What is your goal?

I want to create a workflow with make.com that employs AI to parse database entries and generate summaries. I do not want to use commercial AI from the net; I wish to use locally hosted AI (Ollama) to provide the backend processing of database entries. Can this be done?

:thinking: What is the problem?

Need private, locally hosted AI for security reasons…and general mistrust of commercial AI.

:test_tube: What have you tried so far?

So far, I have just started with make.com. All I have done is create 5 linked databases. I have not yet populated them with trial data, nor tried connecting to a local AI endpoint. Been at this all day, with make.com being the last step to integrate parsing notes/projects/etc. into an auto-generated to-do list / project manager.

:link: Create public scenario page

Don’t have one

Hello,

Welcome to the Community!

Yes, it is possible, but for details you should visit Stack Overflow or DevOps communities.

Your problem centers on network and server configuration: you must expose your server to the internet so Make can connect to your local LLM.

I’m not sure if we will be able to suggest the most suitable config for your needs and possibilities here, as there are plenty of solutions - Cloudflare tunnels, ngrok, firewalls, proxies, and so on. But maybe we have here network and devops experts as well :slight_smile:

If you are a non-technical person, consider hiring someone to do it for you, as you will have to open your network and server to the outside world, and without proper knowledge it could be potentially risky.

Ok, that sounds great. I was just curious if it was even possible. I am schooling myself up to BE a DevOps engineer (I am far short of expert! I've been studying for just about a year now). I have MSTY Studio as my AI chat interface on my Mac, with Ollama models running on a Linux machine with a 24 GB graphics card. So I already have the exposed port between these two computers; all I should need to do is figure out a proper API that can bridge between MSTY and Make. Seems straightforward… (easy to say at the end of the day, but I will be giving it a shot tomorrow or the next day). Cheers!

Sure, you can set up a Flask app or whip something up with FastAPI on the Linux machine so you can talk to the AI over HTTP, and then call it with a generic HTTP module from the Make scenario.

Once you have a generic connection going, you can look to improve the API as needed, but better to start small and get it to work first.
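To illustrate the bridge idea, here is a minimal sketch using only Python's standard library (rather than Flask/FastAPI) so it runs with no extra dependencies. The LAN address, port, and model name are assumptions you would replace with your own:

```python
# Minimal HTTP bridge that Make's generic HTTP module can call.
# It forwards a prompt to Ollama's /api/generate endpoint on the Linux box.
# OLLAMA_URL and MODEL below are placeholder assumptions -- adjust to your setup.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

OLLAMA_URL = "http://192.168.1.50:11434/api/generate"  # hypothetical LAN address
MODEL = "llama3"  # whichever model you have pulled with Ollama

def build_ollama_payload(prompt: str) -> dict:
    """Wrap an incoming prompt in the body Ollama's generate API expects."""
    return {"model": MODEL, "prompt": prompt, "stream": False}

class BridgeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body Make posted, e.g. {"prompt": "..."}
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        # Forward it to Ollama and wait for the full (non-streamed) answer
        payload = json.dumps(build_ollama_payload(body["prompt"])).encode()
        req = Request(OLLAMA_URL, data=payload,
                      headers={"Content-Type": "application/json"})
        with urlopen(req) as resp:
            answer = json.loads(resp.read())["response"]
        # Return the model's text back to Make as JSON
        out = json.dumps({"response": answer}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(out)))
        self.end_headers()
        self.wfile.write(out)

def run(port: int = 8080) -> None:
    """Start the bridge; call run() on the Linux machine to serve forever."""
    HTTPServer(("0.0.0.0", port), BridgeHandler).serve_forever()
```

Make's HTTP module would then POST `{"prompt": "..."}` to this server, via a tunnel (ngrok, Cloudflare) since Make's cloud has to be able to reach it.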

I am working through the agent course in Make Academy. I came across this line: "Your own connection:
You can also connect your AI agent to your own AI provider. This option is only available on the Pro plan and above. Here you can find a list of AI providers you can choose from:" It is unclear whether this 'only available on the Pro plan and above' refers just to this academy course, or whether Make.com does not allow me to use my own AI solution on the free plan. This is a deal breaker if so. Please tell me if I have to go Pro just to use my own local LLM. If so, I will keep searching for a more reasonable solution to creating agents.

You can absolutely call your own agent using a generic HTTP call on the free (or on any other) plan within a Make scenario.

You cannot set up a Make AI Agent backed by your own AI provider on a free plan.


These are two very different things though and from my understanding of your initial post - you want to connect with your agent via an API call inside a Make scenario. And this is absolutely possible on the free plan, yes.

You have not mentioned that you are willing to use AI Agents, just to connect Make to local LLM. It can be done using an HTTP call.

And for custom AI providers in AI Agents App, yes, you have to be on a paid plan to do that.

First - thank you all very much for engaging and replying; this is very much appreciated. I am confused by what you mean by an 'agent' then. It seems like the entire make.com platform is one big agentic resource base. My project is to take a collection of online apps and wire them together via make.com. The flow in essence is this: 1) I type a thought into Slack. 2) At some point, Make queries my Slack channel and ingests the thought. 3) Make sends the thought, along with a classification prompt, to my Ollama LLM, which parses the thought and returns structured JSON back to Make. 4) Make then takes that structured JSON, contacts Notion, and stores the thought in an appropriate database there. Here is the basic template I am working from:

The Core Loop

1. You capture a thought in Slack (5 seconds)
2. Zapier sends it to Claude/ChatGPT for classification
3. The AI returns structured JSON with category, fields, and confidence
4. Zapier routes it to the correct Notion database
5. Zapier replies in Slack confirming what it did
6. Daily/weekly digests surface what matters

The Three Automations

ā— Capture Zap: Slack message → AI classification → Notion filing → Slack confirmation

ā— Fix Zap: Slack reply containing ā€œfixā€ → Parse correction → Update Notion record

ā— Digest Zap (x2): Scheduled trigger → Query Notion→ AI summarization → Slack DM

I am attempting to use Make instead of Zapier, and Ollama instead of ChatGPT.

To my eye, Make is purely an agentic platform, so saying I cannot use 'agents' on the free plan means there isn't really a 'free' plan. I don't need this to run constantly, just a few times a day. I'm not sure that Make is the correct solution here. It seems I have to completely avoid the AI Agents App in Make, which frankly seems like an oxymoron.

Yeah, the terminology is a bit confusing. Make has built-in agents; those are the ones you can't use on the free plan.

Your flow is entirely possible to build on the free plan without any issues.

A Slack Watch module triggers when a new message is sent → a generic HTTP call module forwards it to your AI and receives the JSON response → a Notion module updates (or creates) the database item, and lastly a Slack Send message module responds when it's all done.
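To make the HTTP-call step concrete, here is a sketch of the body that module could send to Ollama's /api/generate endpoint. The model name and category list are illustrative assumptions, not anything Make-specific; Ollama's `format: "json"` option constrains the model to reply with valid JSON, which is what you want before routing to Notion:

```python
# Build the classification request body for Ollama's /api/generate endpoint.
# Model name and category list are illustrative assumptions.
import json

def classification_body(thought: str) -> dict:
    """Return the JSON body Make's HTTP module would POST to Ollama."""
    prompt = (
        "Classify the following note into one of: task, project, idea, reference. "
        'Reply only with JSON like {"category": "...", "summary": "...", '
        '"confidence": 0.0}.\n\n'
        f"Note: {thought}"
    )
    return {
        "model": "llama3",   # whichever model you have pulled
        "prompt": prompt,
        "stream": False,     # one complete response, not a token stream
        "format": "json",    # ask Ollama to emit valid JSON only
    }

# The HTTP module would POST json.dumps(classification_body(text)) to your
# tunnel's /api/generate URL, then JSON-parse the "response" field of the reply.
```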

BUT you can have only 2 active scenarios on the free plan, and you can only consume 1,000 credits a month, with each module consuming one credit every time it runs, so you will hit a limit there. I suggest you combine the first two automations into one, since both are triggered by a Slack Watch module.

No. Make.com is an integration and workflow automation platform, but its usage isn't limited to typical "integration" use cases.
I've built a few projects where Make.com serves as a standalone backend for internal apps, e.g. handling business logic, approvals, notifications, and writing/reading data across multiple tools.

As we can read on Make’s website:

Make is the leading integration and automation development platform which empowers businesses across all verticals to visualize systems, streamline processes and put AI to work – and ultimately realize their full potential.

Source

Usage of Make is IMO pretty much limitless. :slight_smile:

Agents, on the other hand, are just one function of the platform.
They use AI to simplify workflows and delegate decision-making (within defined boundaries), so the scenario can choose the next step based on context instead of relying only on rigid if/else paths and "hardcoded logic".

You can learn more about Agents here: https://www.make.com/en/ai-agents

As a summary: Make.com is a platform offering many possibilities. AI agents are just one of the features.