How to Access Multiple Free AI Models at Once

As creators, we’re always seeking ways to enhance our Make automations without overspending. While closed models like OpenAI’s GPT series are popular, open-source language models are proving to be powerful alternatives, especially for tasks that don’t require paid options.

One tool that has been a game-changer for me is OpenRouter, which gives you access to multiple free language models at once. That lets you leverage the strengths of each model for different tasks and supercharge your automations.

In this guide, I’m going to show you step-by-step how to use OpenRouter to make the most of these free, open-source language models and give your Make automations a serious boost.

So, what exactly is OpenRouter?

OpenRouter is a unified interface that gives you access to multiple large language models (LLMs) in one place. It’s like a hub where you can find the best free and paid models for your Make automations without managing multiple API keys or switching between different platforms.

With OpenRouter, you can easily compare and select the most suitable models based on your specific needs and budget. It simplifies the process of finding and using powerful LLMs, saving you time, effort, and money.

One other cool thing about OpenRouter is that it uses the same API style as OpenAI, making it a breeze to integrate into your existing Make automations without significant changes to your existing scenarios.
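Because OpenRouter exposes an OpenAI-style chat completions endpoint, you can also call it from any plain HTTP client, not just Make. Below is a minimal stdlib-only Python sketch against OpenRouter’s `https://openrouter.ai/api/v1/chat/completions` endpoint; the model ID is illustrative, so check OpenRouter’s model list for the current IDs before using it.

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(model, prompt):
    """Build an OpenAI-style chat completion payload for OpenRouter."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model, prompt, api_key):
    """Send one chat request to OpenRouter and return the reply text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Same response shape as OpenAI's chat completions API
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENROUTER_API_KEY")
    if key:
        # Illustrative model ID; verify against OpenRouter's model list
        print(chat("mistralai/mistral-7b-instruct:free", "Say hello.", key))
```

Because the payload and response shapes match OpenAI’s, swapping a scenario from the OpenAI API to OpenRouter usually comes down to changing the base URL, the API key, and the model ID.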

Another great feature of OpenRouter is that you can browse their platform to see the available models and pricing, and even test models directly before adding them to your Make scenarios. This allows you to find the best fit for your specific needs and budget without any guesswork.

Testing and Comparing Free Models from OpenRouter with OpenAI

In this guide, we’ll put OpenRouter’s free models to the test and compare them with GPT-4 Turbo. By doing so, we’ll gain insights into the performance and capabilities of these open-source alternatives, helping you make informed decisions when choosing the right model for your Make automations.

To make the integration process even smoother, we’ve developed a custom OpenRouter app specifically designed for Make. For a small fee, you’ll have access to a seamless integration that simplifies the process of incorporating OpenRouter into your Make workflows. This app streamlines the setup process, saving you time and effort, so you can focus on building powerful automations.

Throughout this guide, we’ll walk you through the steps of setting up the OpenRouter app, testing different models, and comparing their performance with GPT-4 Turbo. We’ll also provide practical examples and use cases to demonstrate how these models can be leveraged in real-world scenarios.

By the end of this guide, you’ll have a clear understanding of the strengths and limitations of OpenRouter’s free models and how they stack up against GPT-4 Turbo.

Now, after installing the OpenRouter Make app, log in with your account. You can use Google OAuth for a smooth authentication process. Before authorizing OpenRouter, you have the option to set a limit on your spending if you plan to use paid models.

However, if you only intend to use the free models, simply click “Authorize” without the need to create an API key. OpenRouter will generate an API key directly for you, allowing you to start using the app immediately.

It’s worth noting that you can visit your OpenRouter dashboard to view the API key that has been created for you. This dashboard provides an overview of your account details and allows you to manage your API key if needed.

With the OpenRouter Make app set up and authorized, you’re now ready to explore and utilize the various free models available through OpenRouter.

The models we’ll be focusing on in this comparative test are:

1. Google: Gemma 7B (free)
2. Mistral: Mistral 7B Instruct (free)
3. Meta: Llama 3 70B Instruct (nitro) (paid): $0.90/M input tokens / $0.90/M output tokens
4. OpenAI: GPT-4 Turbo (paid): $10/M input tokens / $30/M output tokens
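For the two paid models, the per-million-token prices above make back-of-the-envelope cost estimates easy to script. A small sketch, assuming the listed prices; the model identifiers are illustrative placeholders rather than exact OpenRouter IDs:

```python
# Per-million-token prices (USD) for the paid models in this test,
# taken from the list above. Keys are illustrative model identifiers.
PRICES = {
    "meta-llama/llama-3-70b-instruct:nitro": {"input": 0.9, "output": 0.9},
    "openai/gpt-4-turbo": {"input": 10.0, "output": 30.0},
}

def estimate_cost(model, input_tokens, output_tokens):
    """Estimate the USD cost of one request against a paid model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# e.g. a 1,500-token blog post prompt with a 200-token reply
gpt4_cost = estimate_cost("openai/gpt-4-turbo", 1500, 200)
llama_cost = estimate_cost("meta-llama/llama-3-70b-instruct:nitro", 1500, 200)
```

Running the same request through both paid models this way makes the price gap between them concrete before you commit a scenario to one of them.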

So, the test we’ll conduct is generating SEO-optimized titles and meta descriptions. If you’re in the SEO industry, you know how crucial it is to have well-crafted titles and meta descriptions for your blog posts.

Throughout the test, we’ll keep track of how each model performs, saving the output in a Google Sheet that you can review later.

Let’s begin the test and see how these models tackle this critical aspect of SEO!

blueprint (1).json (71.3 KB)

In this test, we’ll provide each model with the following prompt to ensure a fair and consistent comparison:

As an AI content optimization assistant, your task is to generate SEO-optimized titles and a meta description for the given blog post. Follow these steps:

Carefully analyze the blog post to identify the main topic and key points.
Determine the most relevant keyword based on the content. This will be the "main keyword."
Create 5 compelling, click-worthy title variations, each including the main keyword at the beginning and not exceeding 57 characters.
Write an engaging, informative meta description that incorporates the main keyword and entices readers to click through. The description should be a maximum of 150 characters.

Your output should include only the following, without labels or explanations:
Line 1: Title Variation 1 (max 57 characters)
Line 2: Title Variation 2 (max 57 characters)
Line 3: Title Variation 3 (max 57 characters)
Line 4: Title Variation 4 (max 57 characters)
Line 5: Title Variation 5 (max 57 characters)
Line 6: Meta Description (max 150 characters)
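Because the prompt pins the output to exactly six lines with hard character limits, each model’s reply can be checked programmatically before it’s written to the Google Sheet. A minimal sketch of such a validator (the function name and return shape are my own, not part of the OpenRouter API):

```python
def validate_output(text):
    """Check a model reply against the prompt's format rules:
    exactly six non-empty lines, five titles of at most 57 characters,
    and a meta description of at most 150 characters."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    if len(lines) != 6:
        return False, f"expected 6 lines, got {len(lines)}"
    for i, title in enumerate(lines[:5], start=1):
        if len(title) > 57:
            return False, f"title {i} is {len(title)} chars (max 57)"
    if len(lines[5]) > 150:
        return False, f"meta description is {len(lines[5])} chars (max 150)"
    return True, "ok"
```

In a Make scenario the same checks can be approximated with a filter on line count and `length()` before the Google Sheets module, so malformed replies never reach the sheet.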

By using this standardized prompt, we can accurately assess each model’s ability to follow instructions. This approach will provide valuable insights into their performance and help you determine which model best fits your needs.


From our observations, we noticed that among the free models, Mistral: Mistral 7B Instruct provides good results and follows instructions well.

For the cheap option, Meta: Llama 3 70B Instruct (nitro) works great, performing on par with GPT-4 Turbo. It’s not surprising that it ranks as the number 3 trending model in OpenRouter’s marketing and SEO category.

Refer to the Google Sheet to see the output for each model and compare their performance side by side.

One potential drawback of some free models on OpenRouter is their limited context window, with certain models restricted to just 8K tokens, which may not be sufficient for more complex or context-heavy tasks.

Also keep the free-tier rate limit in mind: if you are using a free model variant, you are limited to 20 requests per minute and 200 requests per day.
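If you script against the free tier outside of Make, a simple client-side throttle keeps you under those limits instead of burning requests on 429 errors. This is a sketch of one possible approach, not a feature of any OpenRouter SDK; the `now` parameter exists only to make the logic testable without real waiting.

```python
import time
from collections import deque

class RateLimiter:
    """Client-side throttle for the stated free-tier limits:
    20 requests per minute and 200 requests per day."""

    def __init__(self, max_per_minute=20, max_per_day=200):
        self.max_per_minute = max_per_minute
        self.max_per_day = max_per_day
        self.minute_window = deque()  # timestamps of recent requests
        self.day_count = 0

    def acquire(self, now=None):
        """Block until a request is allowed; raise once the daily cap is hit."""
        now = time.monotonic() if now is None else now
        if self.day_count >= self.max_per_day:
            raise RuntimeError("daily free-tier request cap reached")
        # Drop timestamps that have aged out of the 60-second window
        while self.minute_window and now - self.minute_window[0] >= 60:
            self.minute_window.popleft()
        if len(self.minute_window) >= self.max_per_minute:
            # Wait until the oldest request in the window expires
            time.sleep(60 - (now - self.minute_window[0]))
            now = time.monotonic()
            while self.minute_window and now - self.minute_window[0] >= 60:
                self.minute_window.popleft()
        self.minute_window.append(now)
        self.day_count += 1
```

Call `limiter.acquire()` before each request; once the daily cap is reached it raises instead of sleeping, since waiting out a day-long window rarely makes sense in a script.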

Finally, I hope this guide has given you an idea of how you can incorporate free and cheap open-source models into your Make scenarios. By leveraging the power of these models through OpenRouter, you can enhance your automations and achieve impressive results without breaking the bank.

If you need any assistance or have further questions, feel free to reach out to me at

I’d love to hear from you! Share your thoughts in the comments below and let us know which AI models you’re most excited to try out in your Make scenarios. Your feedback and experiences can help guide others in the community as they explore the world of open-source language models.