My first post here, so I hope I am not breaking any community guidelines.
I am trying to run a flow in a custom AI agent (GPT), and in the module's settings I have defined an output schema (Make schema, JSON format).
That JSON should be returned in the module's output inside the jsonResponse field, and I am having trouble with it.
There are two routes:
In the first one (when the user is new), the output returns the JSON as a string:
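For illustration of the symptom (a minimal Python sketch, not Make-specific; the field names are placeholders): a JSON string and a parsed object can look almost identical in an output panel, but the string version needs an extra parse step before its fields can be mapped.

```python
import json

# jsonResponse may arrive either as an already-parsed object or as a raw
# JSON string, depending on whether the output schema was applied.
# These two values carry the same data but have different types.
parsed = {"status": "new_user", "message": "Welcome!"}
as_string = '{"status": "new_user", "message": "Welcome!"}'

print(type(parsed).__name__)     # dict -> fields are directly accessible
print(type(as_string).__name__)  # str  -> must be parsed first

# The string output only becomes usable after an explicit parse step:
recovered = json.loads(as_string)
assert recovered == parsed
```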
The settings of the two modules are the same, so I cannot understand what is causing the difference or what I am missing.
I would appreciate any help.
Hi!
First of all, thank you for taking the time to reply.
It is enabled in both modules and set up with the same JSON structure. Actually, both modules are clones.
If you’re certain that all other parameters are identical, could you post the two prompts? I’m assuming these are different, as the promptTokens usage is different in both cases.
If the settings are absolutely the same, try running the Agent module (the one with the incorrect output) using right click → Run this module only.
That should force the scenario to parse the output.
Also, as mentioned by @David, are you sure prompts are the same?
Token usage can vary as you work with different user messages, but maybe there is an extra character or another error that “crashes” the JSON schema?
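One quick way to check for that kind of stray character outside Make (a hedged sketch; `check_json` is just a helper name I made up) is to paste the raw string output into a strict JSON parser and see where it reports a failure:

```python
import json

def check_json(raw: str) -> None:
    """Report whether a string is valid JSON, and where it breaks if not."""
    try:
        json.loads(raw)
        print("valid JSON")
    except json.JSONDecodeError as e:
        # e.pos points at the offending character, e.g. a stray trailing brace
        print(f"invalid JSON at position {e.pos}: {e.msg}")

check_json('{"status": "ok"}')    # valid JSON
check_json('{"status": "ok"}}')   # invalid JSON at position 16: Extra data
```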
Sorry for my question, but what do you mean by the “two prompts”?
Since it is the same agent, the system prompt is the same: the one given when creating the agent.
I ran that module only, and the output's structure was the same as in the other module.
I have to ask what you mean: where might the “extra character” be? I ask because, if you mean the prompt, the only prompt I am aware of is the system prompt used when creating the agent, and since the agent is the same in both modules, I wouldn't even know how to differentiate the prompts between them.
Sorry if my question sounds too naive; I am just getting started with AI Agents in Make.