Trying to speed up my scenario
I notice that in the module “Message an (OpenAI) Assistant” the assistant takes much longer to finish its response than in the playground — roughly 5x as long. Is there anything I can do about this?
Hi Samuel,
I would suggest focusing on reducing the complexity of your prompt and setting a lower max-token limit. This lets OpenAI start streaming the response earlier and finish sooner.
Do your best to keep your messages as brief as possible to minimize processing time.
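If you ever call the Assistants API directly (outside of Make), the same levers are exposed as run parameters. This is a minimal sketch, assuming the v2 Runs API: `max_completion_tokens` caps the answer length, `truncation_strategy` limits how much thread history is re-read, and `stream` returns the first tokens immediately. The helper function and the specific values here are illustrative, not a recommended configuration.

```python
# Sketch of run parameters that favor a fast response when calling the
# OpenAI Assistants API directly (parameter names from the v2 Runs API).
def build_run_params(assistant_id: str, max_output_tokens: int = 256) -> dict:
    """Build a parameter dict biased toward low latency (illustrative)."""
    return {
        "assistant_id": assistant_id,
        "max_completion_tokens": max_output_tokens,  # cap the answer length
        "truncation_strategy": {                     # limit re-read history
            "type": "last_messages",
            "last_messages": 5,
        },
        "stream": True,  # start receiving tokens as soon as they are generated
    }

params = build_run_params("asst_example", max_output_tokens=256)
```

You would pass these parameters to `client.beta.threads.runs.create(...)`; inside Make you set the equivalent fields in the module configuration instead.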