How to re-run a previous module until a condition is met?

I aim to re-execute the OpenAI module whenever the response exceeds 3,000 characters, which is the limit for LinkedIn posts, and to repeat this process until the OpenAI module returns fewer than 3,000 characters.

I welcome suggestions for streamlining this process or exploring alternative approaches.

Hello @M.Jasani, nice to meet you.

OpenAI doesn’t count words; it counts tokens, chunks of text that can be a few letters or a whole word.

You can set the maximum tokens for the GPT module in its advanced settings. If you set the maximum tokens to 500, in principle the output will not exceed 3,000 characters. To be safe, you can add error handling: when the LinkedIn module returns an error saying the text is too long, repeat the ChatGPT–LinkedIn modules with a lower token limit, say 300.
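Outside of Make, the same fallback pattern can be sketched in plain Python. This is only an illustration, not the Make modules themselves: `generate_post` is a hypothetical stand-in for the OpenAI call, and the token limits mirror the 500-then-300 suggestion above.

```python
LINKEDIN_LIMIT = 3000  # LinkedIn's character limit for a post


def generate_post(max_tokens):
    # Hypothetical stand-in for the OpenAI module: a real call would pass
    # max_tokens to the API. Here we fake output at roughly 7 characters
    # per token so the fallback path actually gets exercised.
    return "x" * (max_tokens * 7)


def post_with_fallback(token_limits=(500, 300)):
    """Try each token limit in turn until the text fits the limit."""
    for limit in token_limits:
        text = generate_post(limit)
        if len(text) <= LINKEDIN_LIMIT:
            return text
    # Last resort: hard-truncate the shortest attempt.
    return text[:LINKEDIN_LIMIT]
```

With the fake generator above, 500 tokens produces text that is too long, so the sketch falls back to 300 tokens and returns that shorter draft.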

Don’t hesitate to contact us if you require further assistance.


Is it possible to include in your prompt something like “keep the result under 3000 characters, including spaces and punctuation”?


Absolutely, taking that information into account, I gave ChatGPT a specific prompt:

LinkedIn post cannot exceed 3,000 characters. Write at least 1000 characters

Here’s the thing: sometimes the response I get from ChatGPT is shorter than 3,000 characters. If that happens, no problem. But if it’s longer, I want to keep asking ChatGPT for a shorter response until it finally gives me one that’s under 3,000 characters.

My goal was to create a LinkedIn post in a single scenario execution, getting the content from OpenAI.

I included a Repeater module that keeps retrying until the response it gets is shorter than 3,000 characters.
So, whenever the content length meets the requirement, it gets posted to my LinkedIn profile. To make sure the Repeater doesn’t run indefinitely, I introduced an HTTP module that calls a non-existent web address. It throws an expected error, and a Commit error handler attached to it stops the scenario.
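In code form, the Repeater-plus-forced-stop idea is just a bounded retry loop. Here is a minimal sketch, assuming a hypothetical `generate_post` function in place of the OpenAI module; the `MAX_ATTEMPTS` guard plays the role of the HTTP-error/Commit trick that stops the scenario.

```python
LINKEDIN_LIMIT = 3000
MAX_ATTEMPTS = 5  # plays the role of the forced-error stop


def generate_post(attempt):
    # Hypothetical stand-in for the OpenAI module: each retry is nudged
    # shorter, like re-prompting ChatGPT for a more concise post.
    return "x" * (4000 - attempt * 800)


def repeat_until_short_enough():
    for attempt in range(MAX_ATTEMPTS):
        text = generate_post(attempt)
        if len(text) < LINKEDIN_LIMIT:
            return text  # short enough: hand it to the LinkedIn step
    raise RuntimeError(f"Gave up after {MAX_ATTEMPTS} attempts")
```

With the fake generator above, the first two attempts are too long and the third one passes, mirroring the Repeater iterating until the condition is met.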

I’m glad to report that the scenario runs as planned; everything works the way I wanted.

Absolutely, we can provide ChatGPT with a prompt:

Constraints: LinkedIn post cannot exceed 3,000 characters. Write at least 1000 characters.

Example output lengths:
Response 1: 3,500 characters
Response 2: 2,700 characters
Response 3: 2,100 characters

It might not meet the exact requirement every time, but it will do its best to respond based on the given prompt. The most reliable approach is to repeat the task until we get a response of the desired length.
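A tiny sketch of checking both prompt constraints (at least 1,000 and at most 3,000 characters) against the three example responses above, by length only:

```python
def within_limits(text, lo=1000, hi=3000):
    """Return True when the post satisfies both prompt constraints."""
    return lo <= len(text) <= hi


# The three example response lengths from above.
lengths = [3500, 2700, 2100]
results = [within_limits("x" * n) for n in lengths]
# Only the first response violates the 3,000-character ceiling,
# so only it would trigger another repeat of the task.
```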


That’s a novel way of using the commit module to stop the scenario, by forcing an error. Good job figuring it out, and thanks for sharing your solution!


Heya @M.Jasani :wave:

Wow! I am impressed by the fact that you were able to crack this. It is truly great to see Makers improve and get more proficient at using Make. :relieved:

Thanks a lot for keeping the community in mind and coming back here with additional information and solution.
Great job and keep it up!
