Chat Completion Module Prompt - Not the same results as ChatGPT UI

Hi everyone,

This is my first post, and I’m hoping someone can help.

I created a prompt in the Create a Chat Completion module to compare two JSON files:

  • One file contains a borrower’s qualifications (e.g., credit score, loan amount).
  • The other file defines disqualification criteria (e.g., credit score must be > 600).

The goal is to generate a JSON list of borrower disqualifications.
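For context, here is roughly the shape of the data I'm working with (a minimal sketch; the field names and values are made up for illustration, my real files have more fields):

```python
# Hypothetical examples only, simplified from my real files.
borrower = {
    "borrower_id": "B-1001",
    "credit_score": 580,
    "loan_amount": 450000,
}

criteria = [
    {"field": "credit_score", "rule": "must be > 600"},
    {"field": "loan_amount", "rule": "must be <= 400000"},
]

# Expected output from the prompt: a JSON list of the failed criteria, e.g.
# [
#   {"field": "credit_score", "reason": "credit score 580 is not > 600"},
#   {"field": "loan_amount", "reason": "loan amount 450000 exceeds 400000"}
# ]
```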

I tested the same prompt in both the Create a Chat Completion module and the ChatGPT user interface, using identical parameters (model, temperature, etc.). However, the results are inconsistent: ChatGPT's interface evaluates the disqualifications correctly, while the Make.com module gives me incorrect results.
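For reference, outside of Make I've been testing with something like the call below. This is only a sketch, assuming the module maps to the standard Chat Completions endpoint; the model name, file names, and prompt wording are placeholders, not my exact setup:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The two JSON files described above (placeholder file names).
borrower = json.load(open("borrower.json"))
criteria = json.load(open("criteria.json"))

response = client.chat.completions.create(
    model="gpt-4o",          # placeholder; I select the equivalent model in Make
    temperature=0,           # keeps output as deterministic as the API allows
    response_format={"type": "json_object"},  # forces the reply to be valid JSON
    messages=[
        {
            "role": "system",
            "content": "Compare the borrower data against the disqualification "
                       "criteria. Return only a JSON object with a "
                       "'disqualifications' array listing every failed rule.",
        },
        {
            "role": "user",
            "content": json.dumps({"borrower": borrower, "criteria": criteria}),
        },
    ],
)

print(response.choices[0].message.content)
```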

Does anyone know if there are differences between how ChatGPT operates in Make.com versus its UI? If so, how can I work around these inconsistencies?

Thanks in advance!

Hey there,

yeah, the Make modules let you select from a number of different models, so they can return different results than what you get from the chat window.

I strongly suggest you look into personal GPT assistants and training one on your data to improve the results.

I’d like to remind you that the last time a company implemented an AI to auto-deny insurance claims, its results were so abysmal that a customer murdered the CEO. So do think about who the target audience for this is, and whether this is a process that should be automated or one that is better handled by a person.

Thanks Stoyan. Yes, the strange thing is that I’m using the same ChatGPT model in Make as I am via the ChatGPT interface, and I’m still getting different results. Regarding the insurance claim scenario… any work I do is for human-in-the-loop productivity gains, not AI-driven decision-making without human checks and balances. Good point though!