Hi MAKE Community,
I need to execute a job but can’t scale in the way I want.
Quick explanation:
- I read rows from an Excel file. Each row contains a product idea (500 in total).
- GPT-4o then adds a description and other data to each idea. This works well.
- We provide around 10,000 characters of context to make the model ideate on-brand, on-strategy, and on-audience. This works well.
- Every scenario run burns around 4,000 tokens due to the large context. No problem so far.
- Now it gets tricky: the Excel file has about 500 rows, but our sequence stops after about 10-12 outputs because we hit the token limit.
So, instead of giving the task "Generate 500 idea descriptions" once, how can I automate "Generate 1 idea description" 500 times? My assumption: this way every sequence burns only 4-5k tokens, so we always stay within the limit. But I could not find a way to, for example, check the number of rows in the Excel file and then feed that file into a loop. The Repeater module is also not right for me, because it iterates within the same sequence and blows through my token limit.
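To make the intent concrete, here is a rough Python sketch of the behavior I'm after (all names are made up, and the GPT-4o call is replaced by a stub): each "run" processes exactly one row and advances a stored cursor, so no single run ever accumulates the full 500-row context.

```python
# Hypothetical sketch: one row per run, with a cursor persisted between runs.
# In Make, each loop iteration would be a separate scenario execution.

def process_one_row(rows, cursor):
    """Process a single row; return the advanced cursor and the result."""
    if cursor >= len(rows):
        return cursor, None  # nothing left to process
    idea = rows[cursor]
    # Stand-in for the GPT-4o call that generates the description
    description = f"Description for: {idea}"
    return cursor + 1, description

rows = ["idea A", "idea B", "idea C"]  # stand-in for the 500 Excel rows
cursor = 0                             # would be stored externally between runs
results = []
while cursor < len(rows):
    cursor, desc = process_one_row(rows, cursor)
    results.append(desc)

print(results)
```

The point is that each invocation of `process_one_row` is independent and cheap; only the cursor needs to survive between runs, not the accumulated output.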
Any ideas? Thanks a lot