Hello everyone,
I hope you’re all doing well! I’m currently working on a project where I aim to generate concise product descriptions with ChatGPT via a make.com automation scenario. My process involves passing lists of product names (ranging from 3 to 10 items) to ChatGPT and expecting a description back for each product.
However, I’ve encountered a limitation where ChatGPT can only generate descriptions for up to four products at a time. I’m seeking advice on how to automate the process further to ensure that if a list contains more than four products, ChatGPT continues to generate the missing descriptions without manual intervention.
Is there a way to check whether all product descriptions have been written and, if not, automatically prompt ChatGPT to complete the task? Alternatively, are there strategies for splitting the list into smaller groups of three products and submitting them either sequentially or concurrently in different messages?
Any insights, tips, or examples of similar automations you’ve implemented would be greatly appreciated. I’m keen to make this process as efficient as possible and would love to hear from your experiences.
Thank you in advance for your time and assistance!
Best regards,
Artem
Hello @Artem_Larin and welcome to the community!
I’d say the limitation is not “4 products at a time”, but rather that you’re hitting the available context window of the ChatGPT 3.5 model. You could try a larger model, such as GPT-4-32k, or gpt-4-0125-preview with its very large 128k-token context window.
Or, maybe more likely, you haven’t changed the “Max tokens” property of the Create a Completion module (see screenshot) from the default of 300, which is really low. Try playing around with this number and pumping it up into the thousands. You can also experiment with the max tokens setting in OpenAI’s playground to see what number works best for you.
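For reference, that “Max tokens” property presumably maps to the max_tokens parameter of the underlying API call. Here’s a minimal sketch of the equivalent request made directly in Python; the model name, prompt, and limit are just illustrative values, not a recommendation:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You write concise product descriptions."},
        {"role": "user", "content": "Write a short description for each product: A, B, C."},
    ],
    max_tokens=2000,  # well above the default 300 mentioned above
)
print(response.choices[0].message.content)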
Alternatively, you might want to do it one product at a time. This could be achieved by adding an Iterator module to your scenario to loop over all of the products and calling ChatGPT for each of them.
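If you ever move this out of make.com and into a script, the one-by-one approach might look roughly like this (a sketch only; the function name, prompt wording, and token limit are placeholders I made up):

from openai import OpenAI

client = OpenAI()

def describe_product(name: str) -> str:
    # One completion call per product, mirroring the Iterator idea above.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write concise product descriptions."},
            {"role": "user", "content": f"Write a two-sentence description of: {name}"},
        ],
        max_tokens=200,  # plenty for a single short description
    )
    return response.choices[0].message.content

products = ["Product A", "Product B", "Product C"]
descriptions = {name: describe_product(name) for name in products}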
Let me know if I’m making sense.
Cheers!
Hello @Artem_Larin, nice to meet you.
- As an idea, you can send them iteratively, one by one, with a prompt that gives both the big context (what the project is about, the product range, etc.) and the small context, i.e. the specific product whose description you want.
- Alternatively, send them as manageable, equal-sized batches, and ask for the response as JSON with one key per product:
  {"product1": <product 1 description>,
   "product2": <product 2 description>,
   "product3": <product 3 description>}
- Since the number of products is predictable (say, anywhere from 3 to 10), you can add a step beforehand that checks how many products there are and builds that JSON response template, i.e. {"product1": <product 1 description>, …}, automatically, so it adapts to however many products are passed through the automation (see the sketch below).
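A rough sketch of that pre-step in Python, in case it helps: count the products, build the expected JSON skeleton dynamically, ask for JSON only, and parse the reply. The prompt wording, key naming, and model choice are assumptions for illustration, and you may want extra validation around the parsing.

import json
from openai import OpenAI

client = OpenAI()

def describe_batch(products: list[str]) -> dict[str, str]:
    # Build the expected skeleton: {"product1": "...", "product2": "...", ...}
    skeleton = {f"product{i + 1}": f"<description of {name}>"
                for i, name in enumerate(products)}
    prompt = (
        "Write a concise description for each product below. "
        "Reply with JSON only, using exactly these keys:\n"
        f"{json.dumps(skeleton, indent=2)}\n\n"
        "Products:\n" + "\n".join(f"{i + 1}. {name}" for i, name in enumerate(products))
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1500,
    )
    # Assumes the model returned pure JSON; guard this in real use.
    return json.loads(response.choices[0].message.content)

# Works the same whether the batch has 3 or 10 products.
print(describe_batch(["Wireless mouse", "USB-C hub", "Laptop stand"]))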
Don’t hesitate to contact us if you require further assistance.
//VLAD
Well, given that ChatGPT has a cap on handling up to four products at a time, breaking down your list into smaller chunks seems like the way to go. Here’s how I see it could work:
Batch Processing: This is like dividing your grocery list into smaller parts, so you only grab what you can carry in one trip. You could write a script that splits your big list into groups of three or four. This way, you send over a manageable number for ChatGPT to handle, wait for it to finish, and then send the next group. It’s a bit like waiting in line at the checkout; it keeps things orderly but might take a bit more time.
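A minimal sketch of that batching step, with the chunk size and the process_batch stand-in as placeholders (in practice that stand-in would be your ChatGPT call or make.com module):

def chunked(items, size):
    """Yield successive groups of `size` items from `items`."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def process_batch(batch):
    # Stand-in for the call that returns one description per product in `batch`.
    return {name: f"Description of {name}" for name in batch}

products = [f"Product {i}" for i in range(1, 11)]  # e.g. a 10-item list

all_descriptions = {}
for batch in chunked(products, 4):
    all_descriptions.update(process_batch(batch))  # finish each batch before the next

assert len(all_descriptions) == len(products)  # confirm nothing was skipped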