I want to re-execute the OpenAI module whenever the response exceeds 3,000 characters, which is the limit for a LinkedIn post, and repeat the process until the OpenAI module returns a response under 3,000 characters.
I welcome suggestions for streamlining this process or exploring alternative approaches.
OpenAI doesn’t count words or characters; it counts tokens, which are chunks of text that can be a few letters or a whole word.
You can set the maximum tokens in the GPT module under advanced settings. If you set maximum tokens to 500, in principle the output will not exceed 3,000 characters. To be safe, you can add error handling: when the LinkedIn module returns an error saying the text is too long, repeat the ChatGPT–LinkedIn sequence with a lower number of tokens, say 300.
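For anyone doing this outside Make, the same idea (cap max_tokens, then retry with a smaller budget if the text is still too long) looks roughly like this in Python. This is only a sketch; the model name, token values, and character limit are illustrative assumptions, not the exact module configuration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LINKEDIN_LIMIT = 3000  # LinkedIn's character limit for a post

def generate_post(prompt: str) -> str:
    """Request a post, lowering max_tokens each time the text is too long."""
    for max_tokens in (500, 400, 300):  # mirrors "retry with fewer tokens"
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            max_tokens=max_tokens,
        )
        text = response.choices[0].message.content
        if len(text) <= LINKEDIN_LIMIT:
            return text
    # If even the smallest token budget overshoots, truncate as a last resort
    return text[:LINKEDIN_LIMIT]
```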
Don’t hesitate to contact us if you require further assistance.
//VLAD
Absolutely, taking that information into account, I gave ChatGPT a specific prompt:
LinkedIn post cannot exceed 3,000 characters. Write at least 1000 characters
Here’s the thing: sometimes the response I get from ChatGPT is shorter than 3,000 characters. If that happens, no problem. But if it’s longer, I want to keep asking ChatGPT for a shorter response until it finally gives me one that’s under 3,000 characters.
Solution:
My goal was to create a LinkedIn post in a single scenario execution, getting the content from OpenAI.
I included a Repeater module that keeps requesting new content until the response it gets is shorter than 3,000 characters.
So, whenever the content length meets the requirement, it gets posted to my LinkedIn profile. To make sure the Repeater doesn’t go on indefinitely, I introduced an HTTP module that connects to a non-existent web address. It produces an expected error, and attaching a Commit module as its error handler stops the scenario.
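For anyone curious how this loop-with-a-hard-stop maps outside Make, here is a rough Python sketch of the same idea. The attempt cap, the generate_post helper, and publish_to_linkedin are placeholders I’m assuming for illustration, not actual Make modules:

```python
LINKEDIN_LIMIT = 3000   # LinkedIn's character limit for a post
MAX_ATTEMPTS = 5        # plays the role of the Repeater's fixed iteration count

def generate_post(prompt: str) -> str:
    """Placeholder for the ChatGPT call that generates the post text."""
    raise NotImplementedError

def publish_to_linkedin(text: str) -> None:
    """Placeholder for the LinkedIn 'create a post' step."""
    raise NotImplementedError

def run_scenario(prompt: str) -> None:
    for _ in range(MAX_ATTEMPTS):
        text = generate_post(prompt)
        if len(text) <= LINKEDIN_LIMIT:
            publish_to_linkedin(text)
            return  # in code, returning replaces the forced HTTP error + Commit trick
    raise RuntimeError("No response under 3,000 characters after all attempts")
```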
I’m glad to report that the scenario is running as planned – everything works the way I wanted.
It might not fulfill the exact requirement, but it will do its best to provide a response based on the given prompt. The best approach is to repeat the task until we get a response of the desired length.
That’s a novel way of using the commit module to stop the scenario, by forcing an error. Good job figuring it out, and thanks for sharing your solution!