Difference in GPT Usage Tokens when combining Modules?

I’m currently using Make to automate populating Airtable with ChatGPT-generated messages.

Basic scenario

  • Pull data from Airtable
  • Input field 1 into GPT prompt, generate message
  • Input field 2 into GPT prompt, generate message
  • Input field 3 into GPT prompt, generate message
  • Input field 4 into GPT prompt, generate message
  • Input data back into Airtable

Instead of using 4 separate steps / modules for GPT, would I be better off using a single GPT module with separate “items” for each prompt/message?

I’m wondering if it would lower usage costs. Thanks!

Hey there @tkwitten,
Hope you are doing well.
If you have a separate GPT module for each prompt/message, each module makes its own API call, so its input and output tokens are counted and billed separately, including any shared instructions you repeat in every prompt. If you instead use a single GPT module with separate items, the tokens for all items in that module are counted together, so any shared context only has to be sent once.
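If you want to estimate the difference yourself, here is a rough Python sketch (not Make-specific) that counts tokens with the tiktoken library. The instruction text, the field contents, and the choice of the cl100k_base encoding are just assumptions for illustration, not your actual prompts:

```python
# Rough comparison: four separate requests vs. one combined request.
# Assumes the cl100k_base encoding and made-up prompt text purely for illustration.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Hypothetical shared instruction that would be repeated in every separate prompt
instructions = "You write a short, friendly follow-up message for a customer."
fields = [
    "Field 1: order shipped yesterday",
    "Field 2: invoice is overdue",
    "Field 3: subscription renews next week",
    "Field 4: support ticket was resolved",
]

# Four separate modules: the instruction text is sent (and billed) four times
separate = sum(len(enc.encode(instructions + "\n" + f)) for f in fields)

# One module handling all items: the instruction text is sent once
combined = len(enc.encode(instructions + "\n" + "\n".join(fields)))

print(f"Separate requests: ~{separate} input tokens")
print(f"Single combined request: ~{combined} input tokens")
```

The gap comes from the shared instruction text being encoded once instead of four times; the field content itself costs roughly the same either way, and output tokens are billed on top of this in both setups.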
Let me know if this helps.
