OpenAI Assistant slower in Make than in Playground

Trying to speed up my scenario :rocket:

I notice that in the module “Message an (OpenAI) Assistant” the assistant takes much longer to finish its response than in the Playground, roughly 5x as long. Is there anything I can do about this?

Hi Samuel,

I would suggest that you focus on reducing the complexity of your prompt and setting a lower max token limit. A simpler prompt gives the model less to process before it starts responding, and a lower max token limit caps how much it can generate, so the response finishes more quickly.
Do your best to keep your messages as brief as possible to minimize processing time.
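
If you want to check whether the delay comes from Make or from the assistant itself, here is a rough sketch that calls the same assistant directly with the OpenAI Python SDK and times a run with a capped output length. The assistant ID is a placeholder you would replace with your own, and `max_completion_tokens` is the run parameter the Assistants API uses to limit how much the model generates:

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ASSISTANT_ID = "asst_..."  # placeholder: replace with your assistant's ID

# Keep the user message short and focused; a smaller prompt means
# less for the model to process before it starts answering.
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "Summarize our return policy in two sentences."}]
)

start = time.time()

# Run the assistant with a capped output length; max_completion_tokens
# limits how many tokens the model can generate, so the run ends sooner.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=ASSISTANT_ID,
    max_completion_tokens=256,
)

print(f"Run status: {run.status}, elapsed: {time.time() - start:.1f}s")

# Fetch and print the assistant's latest reply from the thread.
messages = client.beta.threads.messages.list(thread_id=thread.id, limit=1)
print(messages.data[0].content[0].text.value)
```

If the run is similarly slow here, the bottleneck is the assistant (model, instructions, or tools) rather than Make, and trimming the instructions or lowering the token limit in the module should help.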