ChatGPT output cut short

Hi, I'm trying to build a scenario where, when a new record is entered (a quick brainstormed blurb for a video idea), GPT analyzes the idea, expands on it, and gives me potential scripts/hooks. It works mostly as it should; the only problem is that the GPT output is getting cut short, and I'm not sure where to troubleshoot. I don't know if there's a cap on either Make's or OpenAI's end that is truncating my output, or whether the problem is that the data passes from ChatGPT module to ChatGPT module, possibly clogging it (I don't know if that can happen). This is the only thing wrong with my scenario, so once it's fixed everything should work properly. I appreciate any help or feedback. (I'm also not sure this automation is as efficient as it can be; if there is no cap on the output of a GPT module, I might consider getting all the information I need in one message and then splitting it up into my Airtable from there, if it's possible to slice up a message and send the pieces to separate cells.)

blueprint.json (113.0 KB)
This is my current blueprint.

Welcome to the Make community!

It seems like you'll need to increase the Max Tokens value in the OpenAI module. This field might be hidden under the advanced settings.
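For context on why raising that value helps: the OpenAI API reports a `finish_reason` for each completion, and `"length"` means the reply was cut off by the token limit rather than finishing naturally. Here is a minimal Python sketch of that check; the response dicts are illustrative samples (not real API output), assuming the Chat Completions response shape:

```python
# Sketch: detect a truncated completion via finish_reason.
# "length" = the model hit the max tokens limit; "stop" = finished naturally.

def is_truncated(response: dict) -> bool:
    """Return True if any choice was cut off by the token limit."""
    return any(
        choice.get("finish_reason") == "length"
        for choice in response.get("choices", [])
    )

# Illustrative response shapes (structure follows the Chat Completions API).
cut_short = {"choices": [{"message": {"content": "My video script is ab"},
                          "finish_reason": "length"}]}
complete = {"choices": [{"message": {"content": "Full script here."},
                         "finish_reason": "stop"}]}

print(is_truncated(cut_short))  # True  -> raise the Max Tokens value
print(is_truncated(complete))   # False -> the reply ended on its own
```

If a reply ends with `finish_reason: "length"`, increasing Max Tokens (or shortening the prompt) is the usual fix.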

If you need further assistance, please provide the following:

1. Screenshots of module fields and filters

Could you please share screenshots of the relevant module fields and filters in question? It would really help other community members to see what you're looking at.

You can upload images here using the Upload icon in the text editor.

2. And most importantly, Input/Output bundles

Please provide the input/output bundles of the OpenAI module by running the scenario (or getting them from the scenario History tab), then clicking the white speech bubble on the top-right of each module and selecting “Download input/output bundles”.

A.

Save each bundle's contents in your text editor as a bundle.txt file, and upload it here into this discussion thread.

Uploading them here will look like this:

module-1-input-bundle.txt (12.3 KB)
module-1-output-bundle.txt (12.3 KB)

B.

If you are unable to upload files on this forum, you can alternatively paste the formatted bundles in one of these ways:

  • Either add three backticks ``` before and after the code, like this:

    ```
    input/output bundle content goes here
    ```

  • Or use the format code button in the editor.

Providing the input/output bundles will allow others to replicate what is going on in the scenario even if they do not use the external service.

Following these steps will allow others to assist you here. Thanks!



Hi, here is a picture of my scenario. I was able to fix the issue where it didn't output the full message thanks to your comment; I appreciate your help. If you have any more insight on my scenario, that would be helpful. I feel like there's a way to make it more efficient, maybe by having one GPT module produce a long output and then splitting that final output to send to the respective fields in Airtable. But as it is, it serves its purpose, so any general improvements would be appreciated. Thank you for your help :muscle:
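The "one GPT module, then split" idea is workable if you instruct the model to reply in a strict format, such as JSON with one key per Airtable field, and then parse that reply before mapping each piece to its column. Here is a minimal Python sketch of the parsing step; the field names (`expanded_idea`, `hooks`, `script`) and the sample reply are hypothetical examples, not part of your scenario:

```python
import json

# Sketch: split one structured GPT reply into separate Airtable-ready fields.
# Assumes you prompt the model to answer in strict JSON with known keys.

def split_into_fields(gpt_output: str) -> dict:
    """Parse a JSON-formatted GPT reply into a dict of field values."""
    data = json.loads(gpt_output)
    expected = ["expanded_idea", "hooks", "script"]
    # Keep only the fields we plan to write, so stray keys don't break the mapping.
    return {key: data.get(key, "") for key in expected}

# Illustrative model reply (you would instruct GPT to respond in this shape).
reply = ('{"expanded_idea": "A day-in-the-life vlog...", '
         '"hooks": "Hook 1; Hook 2", "script": "Opening line..."}')

fields = split_into_fields(reply)
for name, value in fields.items():
    print(name, "->", value)  # each pair would map to one Airtable cell
```

In Make you could achieve the equivalent with a Parse JSON module (or text/split functions if you use delimiters instead of JSON) between a single GPT module and the Airtable update, rather than running one GPT module per field.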
