🎥 Recording & Recap: Make AI Toolkit & Content Extractor in Action

Hey Makers :waving_hand:

A huge thank-you to everyone who joined our first-ever Community Live Session!
It was amazing to see so many of you there. For those who couldn’t make it, or if you’d like to revisit what we built, we’ve got everything you need right here.

:movie_camera: Full recording

Catch the full session with our expert, @Benjamin_from_Make, right here:


:memo: Session summary

In this session, Ben walked us through a real-world scenario that automatically analyzes, describes, and organizes a messy folder of photos. He showed us how to build a workflow that:

  1. Watches a Google Drive folder for any newly added images.
  2. Uses Make AI Content Extractor to generate a detailed description and detect if a person is in the photo.
  3. Sorts the images by automatically moving them into new “With Human” or “Without Human” subfolders.
  4. Uses Make AI Toolkit to summarize the description, translate it into another language, and generate a clean, readable filename.
  5. Logs all the new data (descriptions, filenames, categories, etc.) into a searchable Airtable database.
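While the demo was built entirely with no-code Make modules, the routing and logging logic of steps 3–5 can be sketched in plain Python to make the decision flow concrete. This is just an illustration, not Make code; the function and field names are hypothetical, while the folder names mirror the demo:

```python
def route_image(description: str, has_person: bool) -> dict:
    """Mirror steps 3-5 of the workflow: pick the destination
    subfolder based on person detection, then build the record
    that would be logged to the Airtable base."""
    subfolder = "With Human" if has_person else "Without Human"
    return {
        "category": subfolder,
        "description": description,
    }

record = route_image("A hiker on a mountain trail at sunset", has_person=True)
# record["category"] is "With Human"
```

In the actual scenario, this branching is handled by a router with two filters, and the record fields are mapped directly into the Airtable module.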

→ Check out the step-by-step use case, recreated using only free tools, with all the resources you need to try it yourself: How to build an Images organizer using the new Make AI Toolkit and AI Content Extractor apps


:sparkles: Your time to shine

This session also kicked off our brand-new Community Challenge! Take what you learned, put it into practice, and you could become our next Master of Make!

→ Join the challenge: 🚀 Community Challenge: AI Toolkit and AI Content Extractor in Action


:man_raising_hand: Questions answered during the session

:purple_circle: Q1: If I use the Make AI model in my scenarios, will it have a memory? (i.e., will it keep names consistent across all the different pictures it names?)

  • A: No, the AI Toolkit and Content Extractor modules are “stateless,” meaning each run is independent and has no memory of previous runs. To create memory, you have two options: 1) Store previous results in a Data Store and feed that information back into a custom prompt, or 2) For more advanced, conversational memory, use the AI Agents feature, which is designed for this.
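The Data Store approach from option 1 boils down to injecting earlier results into the prompt you send to the stateless module. A minimal sketch in Python, assuming the earlier filenames have already been read from a Data Store (the function name is hypothetical):

```python
def build_prompt(new_image_desc: str, previous_names: list[str]) -> str:
    """Emulate 'memory' for a stateless AI module by feeding
    earlier results (e.g., filenames kept in a Data Store)
    back into a custom prompt."""
    history = "\n".join(f"- {n}" for n in previous_names)
    return (
        "Generate a filename for this image, consistent in style "
        f"with these earlier filenames:\n{history}\n\n"
        f"Image description: {new_image_desc}"
    )
```

In Make, the same idea maps to a Data Store > Search Records module feeding its results into the prompt field of Ask Anything.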

:purple_circle: Q2: Can I get the content (OCR) of retail receipts from a photo taken by a mobile phone?

  • A: Yes, absolutely. The Make AI Content Extractor has a dedicated module to “Parse a Receipt” for this exact purpose. For more custom needs, you can also use the “Describe an Image” module with a specific prompt telling the AI exactly what information you want to extract.

:purple_circle: Q3: Is there a reason why you would use Airtable instead of Google Sheets for updating the records?

  • A: Google Sheets would also work perfectly for this use case. Ben chose Airtable for two main reasons: 1) Airtable’s “Interfaces” feature allows you to create polished visual dashboards for your data, and 2) as a true database, it offers more advanced and efficient searching and filtering capabilities than a spreadsheet.

:purple_circle: Q4: Do you have a Google Drive alternative for non-paid Google Workspace users?

  • A: Yes! The scenario in this demo was built using a free, personal (non-paid) Google Drive account, which is fully operational with Make. You can also use other cloud storage services like Dropbox or Box, but depending on the service’s sharing permissions, you may need to add a “Download a File” module to your scenario.

:purple_circle: Q5: What is the possibility of the model using the same name for 2 images? And how does it handle when it encounters that?

  • A: While it’s unlikely, it is possible for the AI to generate the same name for two different images. If your destination system requires unique names, the best practice is to build a simple check into your scenario. Before renaming or creating a record, you can search to see if the name already exists. If it does, simply append a unique identifier like a timestamp or a random string of characters to the filename.
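The collision check described above is simple enough to express as a small helper. A sketch in Python, using a timestamp as the unique identifier (a random string would work just as well):

```python
import time

def ensure_unique(name: str, existing: set[str]) -> str:
    """If the AI-generated name collides with an existing one,
    append a timestamp so the destination system never sees
    two records with the same name."""
    if name not in existing:
        return name
    return f"{name}-{int(time.time())}"
```

In a Make scenario, the `existing` lookup would be a search module (e.g., Airtable > Search Records) followed by a filter that routes collisions through a rename step.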

:purple_circle: Q6: Can you please show how many credits were spent on completing the operations in your example?

  • A: Yes. The final run of the scenario, which processed 8 images, consumed 91 credits. In total, including all the tests, the entire build process consumed 126 credits. As a best practice, always limit your trigger to 1 item while you are building and testing a new scenario to save credits.

:man_raising_hand: Remaining questions + answers

As promised, here are answers to the questions we didn’t get to live:

:purple_circle: Q1: Is the Make AI Toolkit an alternative to an LLM?

  • A: Make AI Toolkit and an LLM serve different purposes and work in different ways.

Make AI Toolkit

  • It’s an app with pre-built AI actions designed to be simple and ready to use. Each module comes preset (hardcoded) with everything it needs, so minimal setup is required.
  • As an alternative, you could use a third-party AI app (like OpenAI, Anthropic, etc.). These require an API key to create a connection and can be more complicated to set up, but they are almost as flexible as the underlying API allows.

LLM

  • An LLM is a model with a specific set of parameters that drive its performance, speed, intelligence, and cost.
  • With the Make AI Toolkit, everybody can choose between the Small / Medium / Large models provided by Make; on Pro, Teams, and Enterprise plans, users can also create their own connections, with their own API keys, to all supported third-party LLM providers.
  • The OpenAI app, of course, can only be used with an OpenAI API key, the Anthropic app with an Anthropic key, and so on.

:purple_circle: Q2: Do you have a “verbal summary”?

  • A: Yes, you can use Make AI Toolkit > Ask Anything to write your own prompt. Here is an example prompt for a “verbal summary” (generated with the help of Gemini):

Act as if you are giving a verbal summary of the following text.
Your summary should be concise and spoken in a clear, conversational tone, like you’re explaining the key points to a colleague in a meeting. Focus only on the most essential information: the core argument, the main findings, and the overall conclusion.
Avoid getting bogged down in minor details, specific data points, or direct quotes. The goal is to give someone a quick and accurate understanding of what the text is about in just a few sentences.
Here is the text: [Map the text here]

:purple_circle: Q3: When will these apps be available in an OEM version?

  • A: At the moment, we don’t have a definite answer for this. However, we’ve shared that there’s interest internally, so hopefully we’ll be able to provide an update about this soon.

:purple_circle: Q4: Can we find modules by providing some kind of descriptive prompt of our expectations?

  • A: Yes, you can use the AI Assistant that appears on the bottom-right of the scenario builder. For example, you can ask the following:
    With the AI Toolkit app, what module should I use if I want to check whether a text is positive or negative?

:purple_circle: Q5: Can you discuss the costs [units] for using the Make AI tools? For example, each module does a separate action (sentiment, extraction, etc) - if we tap into an API of an external source, it may be able to complete all of that in one action.

  • A: The initial goal of the Make AI Toolkit and the Make AI Content Extractor is to simplify the use of AI by leveraging pre-defined prompts for specific use cases. You consume 1 credit per module if you use a third-party LLM (only available for the AI Toolkit on Pro plans and above), and a range of credits when using the Make AI Provider, depending on the module and the volume of data. If you are advanced and want to optimize costs, you can definitely use Make AI Toolkit > Ask Anything and have it perform multiple actions in a single call. If you keep using the Make AI Provider, the number of credits will be based on the number of tokens used; if you use your own LLM, each call consumes a single Make credit, but you will consume tokens on your LLM subscription. It all depends on the complexity you want to achieve. And, of course, you can still use any of the other Make AI apps (OpenAI, Claude, Groq, etc.); each module consumes 1 credit when executed.
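The cost-optimization idea mentioned above (several actions in one Ask Anything call) comes down to how you word the prompt. A hedged sketch in Python of such a combined prompt; the exact actions (sentiment, summary, translation) are just examples:

```python
def combined_prompt(text: str) -> str:
    """Build one prompt that asks for several results at once
    (sentiment + summary + French translation), so a single
    Ask Anything call replaces three separate AI modules."""
    return (
        "For the text below, return JSON with the keys "
        "'sentiment' (positive or negative), "
        "'summary' (one sentence), and "
        "'summary_fr' (the summary translated into French).\n\n"
        + text
    )
```

Asking for structured JSON output like this also makes the single response easy to split back into separate fields with a Parse JSON module.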

:purple_circle: Q6: Do you have recommendations for when it’s better to use Make’s own AI versus using an external LLM provider’s API?

  • A: Sure. For example, if you need to generate binary data (audio, image, video, PDF, etc.), you won’t use the Make AI Toolkit or the Content Extractor apps, since they can’t do that. Another case is when you already have an external provider you want to use: only the Make AI Toolkit on Pro plans and above allows you to create a connection to it. If you are on the Core or Free plan, or if you want to extract information from binary files, you’ll prefer to use another Make app instead of the Make AI Toolkit and Content Extractor apps.
    The Make AI Content Extractor may also have limits (maximum file size, maximum resolution) that would limit your usage if you want to handle large files. See this article where all limits are explained: Make AI Content Extractor - Apps Documentation.

:purple_circle: Q7: Can we do this? For example, if I have a PDF with a lot of images with a description of text, and I want to find them externally. If I say I want to see this kind of image, can I correlate with it and show me the exact thing in an app or in this scenario?

  • A: Currently, the functionality you’re asking about isn’t supported. However, we’re always exploring ways to improve and will keep the community updated on any future developments.

:purple_circle: Q8: I am interested in using an online folder to review documents to make sure a prescribed sequence of documents has been completed, and review the content of those documents to prepare a report.

  • A: This is achievable with Make by mixing a scenario with AI. You could build a scenario that lists the files from the specific folder, then use a mix of Content Extractor > Extract Text From a Document and AI Toolkit > Ask Anything to extract information from each file and ask the AI to analyze it (e.g., is the file complete? It should contain this and that…). Note that the online drive you use has to be accessible via an API (e.g., Box, Google Drive, Microsoft OneDrive).

:purple_circle: Q9: Is it possible to extract a similar kind of information from lesser-known image formats like TIFF or RAW?

  • A: A lot of different formats are supported. TIFF is one of them. RAW is not supported. For the full list of formats supported, and also the limits in size of files, you can refer to the following Help Center article: Make AI Content Extractor - Apps Documentation.

:purple_circle: Q10: Can I add an MCP layer to give the model context, e.g., a file with all names? This would be more dynamic than engineering a prompt.

  • A: If your question is about consuming a third-party MCP tool, yes, you can use the Make MCP Client app.

:purple_circle: Q11: How good is this at bypassing Captcha puzzles that ask you to pick every picture with a certain object in it?

  • A: Automation solutions like Make usually leverage APIs; they don’t access websites directly. This means there is no Captcha involved when they fetch a file.

:purple_circle: Q12: Instead of Claude or Chat GPT, can I use an LLM aggregator such as Poe or so?

  • A: You can use any LLM, as long as it provides an API and you have the relevant subscription to it. Make provides an App for most LLMs. For the specific case of POE, they provide an OpenAI-compatible API (OpenAI Compatible API | Poe Creator Platform), which is supported by the Make AI Toolkit.

:purple_circle: Q13: Is there a way to automatically anonymize PDF documents, i.e., redact names and personal data, before evaluation? I believe Adobe offers such a function in its Pro version. Can this be integrated?

  • A: There is no automatic way to anonymize the text extracted from a PDF when you use the Make AI Content Extractor. However, you can add the Make AI Toolkit / Ask Anything, and ask it to anonymize any personal information extracted in the previous step.

:purple_circle: Q14: Is there a list of use cases that are already created?

  • A: You can check out the “Helpful Resources” section of this post for some example use cases.

:purple_circle: Q15: I notice this workflow has 12 modules - as we’re building things, what number of modules in one workflow is too many?

  • A: There is no specific rule regarding the number of modules, but we recommend avoiding overly large scenarios, because they can become harder to maintain and debug. We recommend using sub-scenarios to split a scenario into smaller services. See here for more information about sub-scenarios: Subscenarios - Help Center

:purple_circle: Q16: Does this only work for pictures or for videos, too?

  • A: The Make AI Content Extractor cannot handle video as of now, so only images can be used in that case. If you need to handle videos, you will need an external LLM, used via the relevant Make app.

:toolbox: Helpful resources


:thinking: We’d love to hear your thoughts

What was your biggest takeaway from the session? Share your thoughts or ask follow-up questions below. :backhand_index_pointing_down:


:tear_off_calendar: Save the date

Want to learn more about using AI for business impact?

Join a webinar on Wednesday, September 24th, 3:00 PM CEST, where the Make team will showcase 3 proven AI use cases that boost performance, generate ROI, and save time.

:backhand_index_pointing_right: Save your spot!
