Issue with Output Length in ChatGPT GPT-4o Module

Hello to the community!

I would like to create a document of about 20,000 words using ChatGPT and Google Docs.

The question I have is:

A ChatGPT module with the GPT-4o model is supposed to have a context window of 128,000 tokens, with 4,096 tokens for output, if I’m not mistaken.

And 4,096 tokens should correspond to roughly 2,000 to 3,000 words (at about 0.75 words per token).

How is it that I only get 400 to 800 words of output from a ChatGPT module (far from the 2,000 to 3,000 words)?
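For reference, here is a quick way to check how many tokens an output actually contains (a minimal sketch, assuming a recent tiktoken release that includes the o200k_base encoding used by GPT-4o; the file name is just a placeholder):

```
# Rough sanity check: how many words and tokens does a given output contain?
# Assumes a recent tiktoken release that ships the o200k_base encoding
# used by GPT-4o; "chatgpt_output.txt" is a placeholder file name.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")

with open("chatgpt_output.txt", encoding="utf-8") as f:
    output_text = f.read()

words = len(output_text.split())
tokens = len(enc.encode(output_text))

print(f"{words} words ~ {tokens} tokens")
```

By that rough 0.75-words-per-token ratio, 400 to 800 words is only around 500 to 1,100 tokens, nowhere near the 4,096-token cap.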

Is there something I didn’t understand correctly?

Thank you :pray:

Welcome to the Make community!

You need to provide more information on your inputs.

1. Screenshots of module fields and filters

Please share screenshots of the relevant module fields and filters in question. It would really help other community members to see what you’re looking at.

You can upload images here using the Upload icon in the text editor.

2. Scenario blueprint

Please export the scenario blueprint file to allow others to view the mappings and settings. At the bottom of the scenario editor, you can click on the three dots to find the Export Blueprint menu item.

(Note: Exporting your scenario will not include private information or keys to your connections)

Uploading it here will look like this:

blueprint.json (12.3 KB)

3. And most importantly, Input/Output bundles

Please provide the input and output bundles of the trigger/iterator/aggregator modules by running the scenario (or getting them from the scenario History tab), then clicking the white speech bubble on the top-right of each module and selecting “Download input/output bundles”.

A.

Save each bundle’s contents in your text editor as a bundle.txt file, and upload it here in this discussion thread.

Uploading them here will look like this:

module-1-output-bundle.txt (12.3 KB)

B.

If you are unable to upload files on this forum, you can alternatively paste the formatted bundles in this manner:

  • Either add three backticks ``` before and after the code, like this:

    ```
    input/output bundle content goes here
    ```

  • Or use the format code button in the editor.

Providing the input/output bundles will allow others to replicate what is going on in the scenario even if they do not use the external service.

Following these steps will allow others to assist you here. Thanks!

samliew | request private consultation

Join the Make Fans Discord server to chat with other makers!

Hi,

  1. The “Chat Completions” API doesn’t work like your browser’s UI. Each run is a new “conversation,” so the context window only contains the data you pass in via the user and assistant messages of that request (see the sketch after this list).

  2. Have you set the max tokens limit in your scenario to 0 or to 4096?

  3. Please also check the API rate limits; maybe you are exceeding them.
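To illustrate points 1 and 2, here is a minimal sketch of what a single Chat Completions call looks like, assuming the official openai Python SDK (v1+); the model name, prompts, and max_tokens value are placeholders, not your actual scenario settings. Because each call is stateless, anything the model should “remember” has to be resent in the messages array, and max_tokens only caps how long the reply may be:

```
# Minimal sketch of a single, stateless Chat Completions call.
# Assumes the official openai Python SDK (v1+); model name and
# prompts are placeholders, not your real scenario values.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    max_tokens=4096,  # upper bound on the reply length, not a target
    messages=[
        # The API keeps no memory between runs: earlier sections,
        # style instructions, etc. must be passed again on every call.
        {"role": "system", "content": "You are a long-form writing assistant."},
        {"role": "user", "content": "Write the next section of the report in about 2000 words."},
    ],
)

print(response.choices[0].message.content)
```

Note that max_tokens is only an upper limit; the model can finish well before reaching it, which is one common reason replies come back much shorter than expected.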