I don't understand tokens and token limits

Hey, I'm having trouble understanding tokens and token limits for the ChatGPT modules. I'm generating article summaries in one of my scenarios; the articles can range from 2 pages up to larger ones of around 30 pages, and I want lengthy responses generated. The OpenAI website says something about a 128k token limit, but on Make.com there's a 4096 limit?

In summary, I want really long and detailed responses. What's the best way to do that, and which ChatGPT module would be best?

blueprint 4.json (94.1 KB)

You are confusing the Context Window with the Output limit.

128k tokens is the Context Window limit.

4096 (4k) tokens is the Output limit.

You need to use multiple modules to return more than 4096 tokens.
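To see why, here's the arithmetic as a small sketch (the 128k/4k figures are the limits quoted above; the ~500 tokens per page is just an illustrative estimate, not an exact figure):

```python
import math

CONTEXT_WINDOW = 128_000  # max tokens the model can READ (prompt + completion)
MAX_OUTPUT = 4_096        # max tokens a single completion can RETURN

def calls_needed(desired_output_tokens: int) -> int:
    """Minimum number of completion calls to produce the desired output length."""
    return math.ceil(desired_output_tokens / MAX_OUTPUT)

# A 30-page summary at roughly 500 tokens per page is ~15,000 output tokens,
# which no single 4k-capped completion can return:
print(calls_needed(15_000))  # -> 4 separate module calls
```

So a long article fits comfortably in the 128k context window on the way *in*, but getting a very long summary *out* requires chaining several module calls.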

Hope this helps! Let me know if there are any further questions or issues.

@samliew

P.S.: Investing some effort into the Make Academy will save you lots of time and frustration using Make.

What would be a good way to use multiple modules without screwing up the output?

I would suggest you first ask GPT to break the article into sections. Using the Create Structured Data module and having it return an array is probably the most consistent way to do this.
Then you can use an iterator to have GPT summarize each of the sections, and use an aggregator to piece it all together.
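In Make this is three modules, but the same pattern written out in Python looks like the sketch below. The `summarize_section` function is a hypothetical stand-in for the per-section GPT call; the paragraph-based splitter is one simple way to section an article, not the only one:

```python
def split_into_sections(article: str, max_chars: int = 4000) -> list[str]:
    """Break the article into sections on paragraph boundaries
    (the role the structured-data / array step plays in Make)."""
    sections: list[str] = []
    current = ""
    for para in article.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            sections.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        sections.append(current)
    return sections

def summarize_section(section: str) -> str:
    """Hypothetical stub: a real scenario would call the GPT module here."""
    return section[:100]

def summarize_article(article: str) -> str:
    sections = split_into_sections(article)               # structured-data step
    summaries = [summarize_section(s) for s in sections]  # iterator step
    return "\n\n".join(summaries)                         # aggregator step
```

Each section stays well under the output cap, so every per-section summary comes back complete, and the aggregator simply concatenates them into one long result.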
But may I ask why you need a summary that can be up to 30 pages long?

Yeah, I don't know why either; it's just what a client of mine wants. I explained that a 30-page summary doesn't make much sense, but he's very intent on it being 30 pages.