🎥 Recording & Recap: Product Team Spotlight - All About Agents & Sneak Peek

Hey Makers :waving_hand:

A huge thank-you to everyone who joined our Community Live Session: Product Team Spotlight - All About Agents & Sneak Peek!

:rocket: As always, it was fun, inspiring, and most of all, practical and useful! And for all of you who didn’t manage to catch it – as promised, here is the recording of the session, along with answers to all your questions below.

:movie_camera: Full recording

:man_raising_hand: Q&A from the session

:purple_circle: Q1: Is it possible for Agents to learn from historical messages or conversations, meaning that they can use past messages as context or training data for future interactions? Just like when you use Gemini or ChatGPT in the web app or app on your PC.

A: Yes. You can create a custom tool in Make for an agent to save observations / user preferences / etc., and then inject this file into the Agent’s Knowledge for every run.

Each Agent also has “Thread IDs”, which bind the conversation together, to make sure the agent doesn’t lose track of the previous messages within a single thread.
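For those curious what that memory pattern looks like outside of the visual builder, here is a rough Python sketch of the same idea. This is purely illustrative and not Make’s internal API – the file name and helper functions are made up for the example: a custom “save observation” tool persists notes, and the stored notes are prepended as context on every run.

```python
# Illustrative sketch only (not Make's internal API): the memory pattern described above.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical storage location

def save_observation(note: str) -> None:
    """What a custom 'save observation' tool would do: append a note to persistent storage."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    notes.append(note)
    MEMORY_FILE.write_text(json.dumps(notes, indent=2))

def build_context() -> str:
    """What injecting the file into the agent's Knowledge amounts to: prepend past notes to the prompt."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    return "Known user preferences and past observations:\n" + "\n".join(f"- {n}" for n in notes)

save_observation("User prefers replies in Spanish.")
print(build_context())
```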

:purple_circle: Q2: What was the name of the LLM other than ChatGPT and Anthropic?

A: There are many other LLM providers, such as Gemini, Mistral, Grok, and more. They differ in speed, quality, and price. Make supports all of these out of the box, with the option to try out many others as well.

:purple_circle: Q3: Could you detail some examples for HR?

A: We will have easy copy-paste Job Fit Assessment and Interview Prep Agents in our agent library – so stay tuned! (To be released soon with the new agents.)

:purple_circle: Q4: Can’t you have the agent verify its own results?

A: Yes, definitely – a common use case is having one agent verify the output of the previous agent. “LLM as a judge” is commonly used for lightweight evals.
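As a rough illustration of the “LLM as a judge” pattern – not a Make feature, just a generic sketch using the OpenAI Python SDK, where the model name and the rubric are assumptions – a second model reviews the first agent’s output against the original task:

```python
# Minimal "LLM as a judge" sketch; any LLM provider works the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(task: str, answer: str) -> str:
    """Ask a second model to verify the first agent's output against the original task."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; pick whatever judge model you prefer
        messages=[
            {"role": "system", "content": "You are a strict reviewer. Reply with PASS or FAIL plus one sentence of reasoning."},
            {"role": "user", "content": f"Task: {task}\n\nProposed answer: {answer}"},
        ],
    )
    return response.choices[0].message.content

print(judge("Summarize the Q3 sales report in 3 bullet points.", "Sales went up."))
```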

:purple_circle: Q5: I see Agents in a scenario as a smart Router that needs instructions and routes to tools that are available once the task is defined (using Context). Is it correct to assume that a scenario that contains many Routers may benefit from an Agent?

A: Yes. If your scenario already leverages a lot of AI, using an Agent can cut down on a lot of setup, especially if the steps are repetitive and duplicated within the scenario.

:purple_circle: Q6: Do we expect agents to eventually become good enough that using them with structured data makes sense?

A: Structured data is actually an even better input than unstructured data when passing information into an agent. The agent also receives the interface definitions of its tools, meaning it can format its data into the expected structure when interacting with them.

As for the Agent’s output, the Agent has an option to define a Data Structure, which guarantees that it always returns the answer in the expected format.

You can then map this information into the following modules.
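To make that concrete, here is a hedged sketch of what such a Data Structure amounts to conceptually: a schema the agent’s answer must conform to, so downstream modules can map fields reliably. The field names and sample output below are illustrative assumptions, not Make’s exact configuration format.

```python
# Conceptual sketch: a schema the agent's answer is expected to conform to.
import json

LEAD_SCHEMA = {
    "type": "object",
    "properties": {
        "company": {"type": "string"},
        "score": {"type": "number"},
        "summary": {"type": "string"},
    },
    "required": ["company", "score", "summary"],
}

# Example agent output that already matches the schema above.
agent_output = '{"company": "Acme Corp", "score": 8.5, "summary": "Strong fit for the enterprise plan."}'
data = json.loads(agent_output)

# With a guaranteed structure, mapping into following modules is just field access:
print(data["company"], data["score"])
```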

:purple_circle: Q7: Quick question to confirm my understanding: so you can use an AI agent to create your own chatbots, based on your business information and content rules, to handle your consumers’ FAQs?

A: Yes. You would upload your business information and FAQ into our Knowledge app, which the Agent will use to get the relevant information for each request.

You can also upload past customer tickets (cleared of personal data), so the agent can search past cases that could satisfy the user’s request.

:purple_circle: Q8: I started to integrate an AI with my scenario, but you replied to me that I must be verified to start. How long does the verification process take after I finish your requirements?

A: I don’t know exactly what this verification is about. If it’s the verification in your OpenAI account, it usually takes just a couple of minutes :slight_smile:

:purple_circle: Q9: What are the must-have points in the instructions of an agent prompt?

A: Usually you would write the role the Agent should take on (“You are an HR Manager”), how it’s supposed to act (“Act friendly and funny”), what its rules are (“Don’t give out any discounts”), configuration (“Always answer in Spanish”), and guidelines on how to use its tools.

“If the user creates an order, put their name, email, and order into a Google Sheet” (provided you have a tool called “Add data to Google Sheet”), and similar :slight_smile:
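Pulled together, those pieces might read like the following example instruction. The wording and the tool name are illustrative only; adapt them to your own agent and tools.

```python
# Example of the prompt structure described above, collected into one instruction string.
# The wording and the tool name are illustrative assumptions.
AGENT_INSTRUCTIONS = (
    "You are an HR Manager. "                          # role
    "Act friendly and funny. "                         # behaviour
    "Don't give out any discounts. "                   # rules
    "Always answer in Spanish. "                       # configuration
    "If the user creates an order, add their name, email and order "
    "using the 'Add data to Google Sheet' tool."       # tool guidelines
)
print(AGENT_INSTRUCTIONS)
```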

:purple_circle: Q10: Another question :slight_smile: is your platform built on Gemini, OpenAI, or what exactly?

A: Our platform is LLM-provider agnostic. We support a multitude of providers, e.g. OpenAI, Gemini, Claude, Grok, Mistral, and more. Additionally, you can bring in many open-source models, such as DeepSeek. We are constantly adding native support for more LLM providers :slight_smile:

:purple_circle: Q11: Regarding RAG, what are the advantages of using Pinecone instead of OpenAI vectorisation?

A: Mostly customization. Using Pinecone is a rather “bare-metal” implementation, which allows for the most customization but is the hardest to set up. Using a SaaS solution, such as our Make Knowledge, lets you handle the whole RAG setup in just a minute.
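For a sense of what the “bare-metal” route involves compared to a managed option, here is a rough sketch using OpenAI embeddings with Pinecone. The index name, model, and sample data are assumptions, and the index is presumed to already exist with a matching dimension (1536 for the model used here).

```python
# Rough sketch of do-it-yourself RAG: OpenAI embeddings + a Pinecone index you manage.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")  # placeholder key
index = pc.Index("faq-knowledge")               # assumes the index already exists

def embed(text: str) -> list[float]:
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

# Ingest: embed a document chunk and store it with its text as metadata.
index.upsert(vectors=[{
    "id": "doc-1",
    "values": embed("Refunds are processed within 14 days."),
    "metadata": {"text": "Refunds are processed within 14 days."},
}])

# Retrieve: embed the question and fetch the closest chunks for the agent to use.
results = index.query(vector=embed("How long do refunds take?"), top_k=3, include_metadata=True)
for match in results.matches:
    print(match.metadata["text"])
```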

:purple_circle: Q12: I would like to build an agent that takes information from a spreadsheet filled with about 10,000 LLM prompts. I would like it to test the prompts and add the outcome to a specific column of the spreadsheet, with a comment on the outcome. Is this doable?

A: Yes, definitely. You would read each prompt from the Google Sheet, map it into the agent, and then map the agent response to the specific column in the Google Sheet.

Just to mention, 10,000 prompts will not be the cheapest workflow, so make sure to test with smaller chunks of the dataset, and use an appropriately priced LLM model :slight_smile:
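The Make scenario itself is configured visually, but the loop it performs is roughly equivalent to the following plain-Python sketch, using a local CSV instead of a Google Sheet and the OpenAI SDK as an example engine. File, column, and model names are assumptions.

```python
# Plain-Python equivalent of the loop: read prompts, test each one, write outcome + comment back.
import csv
from openai import OpenAI

client = OpenAI()

def run_llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed, reasonably cheap model for a 10,000-row run
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

with open("prompts.csv", newline="") as src:
    rows = list(csv.DictReader(src))  # assumes a single "prompt" column

# Start with a small slice to validate quality and cost before the full run.
sample = rows[:20]
for row in sample:
    row["outcome"] = run_llm(row["prompt"])
    row["comment"] = run_llm(f"In one sentence, assess this LLM output:\n{row['outcome']}")

with open("prompts_with_outcomes.csv", "w", newline="") as dst:
    writer = csv.DictWriter(dst, fieldnames=["prompt", "outcome", "comment"])
    writer.writeheader()
    writer.writerows(sample)
```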

:purple_circle: Q13: Couldn’t you just do this with a deep research prompt?

A: For the first specific case presented, with one web search, yes. The power of the agent is that it’s iterative, that it can get data from various sources, and that it can perform a lot of actions through its tools.

:purple_circle: Q14: Does the Web-Agent-Module replace web-scrapers?

A: Not really. A web-agent module (Make web search: an LLM plus web search/browsing) is great for getting a one-off answer – find a few relevant pages, read them, summarize and reason. A web scraper (Apify/Firecrawl) is for repeatable, scalable data extraction – crawl many URLs, handle site structure/JS, export structured datasets.

:purple_circle: Q15: Can you give the agent a docx as an input?

A: You can give it multiple image input formats, as well as PDF. Support for more text-based inputs, such as DOCX or TXT, will come in the future. The agent can also output information in the desired formats, including DOCX, CSV, PDF, and TXT.

:purple_circle: Q16: Why do you need to use Firecrawl?

A: You can use any services you prefer. Firecrawl is used here mostly as a preference and an example.

:purple_circle: Q17: Why do my agents not have the Tool and Knowledge add-ons?

A: The agent shown on screen is the second version of our agent, which will be released at the beginning of February. The agents that are currently live work a bit differently and do not have these more visual features.

:purple_circle: Q18: Why, when I open my Make AI Agents, is it something different from what you showed? I have Tools, MCP, Thread ID…

A: The agent shown on screen is the second version of our agent, which will be released at the beginning of February. The agents that are currently live work a bit differently and do not have these more visual features.

:purple_circle: Q19: Is this an unreleased version of the AI Agent module? The one I see does not look like this.

A: The agent shown on screen is the second version of our agent, which will be released at the beginning of February. The agents that are currently live work a bit differently and do not have these more visual features.

:purple_circle: Q20: Is the add Knowledge function always available? I don’t see it in my environment.

A: It will be available publicly soon – what you have seen was a preview of an upcoming release.

:purple_circle: Q21: Is this feature available to test on the free plan? (Make AI Agent)

A: It will be available publicly soon – what you have seen was a preview of an upcoming release.

:purple_circle: Q22: Can you also add the spreadsheet module after the AI agent and map the output as a variable?

A: Yes, you can – the agent output is a mappable field.

:purple_circle: Q23: Why, in the example you showed, do you use two web search tools?

A: If you were to take it further, you would use “Scrape website” to extract content from URLs (Firecrawl integration), and then “Search web” to find additional market intelligence (Make AI Web Search).

:purple_circle: Q24: If I already built a Make agent last year, would it automatically change to this new format?

A: No, unfortunately that is not possible due to technical differences between the two solutions. We will, however, provide features and support to make the transition as smooth as possible, but you will have to recreate the agent.

:purple_circle: Q25: So we will be able to do everything we want with agents, but now everything will be in the scenario itself?

A: Yes, exactly. No more “black box” like the soon-to-be-deprecated agent – you will have full observability and transparency inside the scenario builder.

:purple_circle: Q26: What’s the difference between the “Crawl a website” and web search, or am I confused?

A: A web-agent module (Make web search: an LLM plus web search/browsing) is great for getting a one-off answer – find a few relevant pages, read them, summarize and reason. A web scraper (Apify/Firecrawl) is for repeatable, scalable data extraction – crawl many URLs, handle site structure/JS, export structured datasets.

:purple_circle: Q27: So essentially, does this mean we could create a scenario with just a bunch of agents?

A: Yes, and that is how you should use them in complex workflows – by splitting the tasks as much as you can, so each agent has a simple goal.

:purple_circle: Q28: Do all the instructions in the prompt have to be in English?

A: No, you can use any language you want – but it is up to the engine (LLM) you select to work well with the given language.

:purple_circle: Q29: What’s the difference between the Firecrawl and AI web search? I don’t understand why they’re both there.

A: A web-agent module (Make web search: an LLM plus web search/browsing) is great for getting a one-off answer – find a few relevant pages, read them, summarize and reason. A web scraper (Apify/Firecrawl) is for repeatable, scalable data extraction – crawl many URLs, handle site structure/JS, export structured datasets.

:purple_circle: Q30: Is there some sort of .xaml or .xml file (or other) where you can “back up” your scenarios?

A: You can export your scenarios as blueprints (in JSON) if you want to store them locally.

:purple_circle: Q31: Initially the context was 1 hour per lead, 150 leads. Is this scenario a web scrape for each lead separately? In other words, is the agent scraping across my 150 leads?

A: If it were fully automated, then yes – you would feed it the 150 leads and it would conduct the research for you.

:purple_circle: Q32: Which GPT should I use for instructions and support related to WF and AI Agents?

A: A rule of thumb is to select the best-performing model to test the agent, and then try downgrading models to optimize for price.

:purple_circle: Q33: I’d like to know how I can ask for information on how to create an automated telephone agent.

A: We will actually have such an agent in our agent library, which will be released soon. So stay tuned!

:purple_circle: Q34: Please update us more on IT security.

A: At Make, we have a non-training agreement with all LLM providers, so your data will not be used to train models.

:purple_circle: Q35: Is there detailed documentation on what each element of a module means?

A: Yes – I recommend going through Make Academy, where we explain the principles of building agents in more detail.

:man_raising_hand: … Anything to add?

If you have additional comments or questions, feel free to comment and start a discussion below this post! We will be checking in over the coming weeks, and we’re happy to help! :slight_smile:
