OpenAI max tokens

I’m using the HTTP module to make calls to OpenAI’s Whisper model to transcribe call recordings, then passing the transcript to the gpt-3.5-turbo model to summarize it.

This works perfectly for most scenarios, but for longer call recordings/transcripts the request fails because it hits the token limit.

I see some information in the OpenAI docs about strategies to split and manage this, but was wondering if anyone had any functional examples of what that looks like in Make?
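For anyone landing here later: outside of Make, the "split and manage" strategy the OpenAI docs describe starts with cutting the transcript into pieces that each fit under the model's context limit. A minimal sketch in plain Python; the ~4-characters-per-token ratio is a rough heuristic (the tiktoken library gives exact counts), and the 3000-token default is an arbitrary choice:

```python
def chunk_text(text, max_tokens=3000):
    """Split text into sentence-aligned pieces that each stay
    (approximately) under max_tokens."""
    max_chars = max_tokens * 4  # crude ~4-chars-per-token heuristic
    chunks, current = [], ""
    for sentence in text.split(". "):
        # flush the current chunk before it would overflow
        if current and len(current) + len(sentence) + 2 > max_chars:
            chunks.append(current.strip())
            current = ""
        current += sentence + ". "
    if current.strip():
        chunks.append(current.strip())
    return chunks
```

Each chunk can then be summarized in its own gpt-3.5-turbo call, and the partial summaries merged with one final call.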

Hi. If you are sending a voice file for transcription, there is no option other than splitting the file, downloading each piece, and then making multiple requests to the OpenAI API to transform the audio into text. Look at this page:
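To illustrate the splitting Helio describes: Whisper caps uploads at 25 MB, so a long recording has to be cut into windows, each sent as its own request. Here is a hedged sketch of just the window math; actually cutting the audio would need a library such as pydub, and the 10-minute window and 2-second overlap are arbitrary choices, not values from the docs:

```python
def split_windows(duration_ms, window_ms=10 * 60 * 1000, overlap_ms=2000):
    """Return (start, end) millisecond ranges covering a recording,
    overlapping slightly so words at a boundary are not clipped."""
    windows = []
    start = 0
    while start < duration_ms:
        end = min(start + window_ms, duration_ms)
        windows.append((start, end))
        if end == duration_ms:
            break
        start = end - overlap_ms  # back up so the next window overlaps
    return windows
```

Each (start, end) range would then be exported as its own audio file and posted to the transcription endpoint; the transcripts are concatenated afterwards.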


Thanks, Helio!
Wemakefuture
If you have questions reach out :wink:


Yeah, I had read that; in their API docs for Whisper they include an example using a Python library to split the file, and so on.

Just wondering if anyone has solved this directly through Make or not…

Hi @Nelson_Keating

This is an error from OpenAI, so you would need to contact their support for more insight. However, to get the right parameters, please try the Playground tool first. Once the parameters are confirmed, implement them in the HTTP module to avoid this error.

Best Regards,

Msquare Automation
:point_right: Visit our website | :spiral_calendar: Book a free consult


It’s not an OpenAI error at all; in the Playground you cannot split files, etc. I have working modules using HTTP, but the issue is when the files are too big, and I need the ability to chain them together.
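One way to sketch that chaining: summarize each transcript chunk separately, then ask the model to merge the partial summaries (a map-reduce pass). In the sketch below, `summarize` is a placeholder for whatever actually makes the gpt-3.5-turbo call, whether Make's HTTP module or the OpenAI SDK; the prompts are illustrative, not from any docs:

```python
def summarize_long_transcript(chunks, summarize):
    """chunks: list of transcript pieces; summarize: fn(prompt) -> str."""
    # map step: one summary per chunk
    partials = [summarize(f"Summarize this call excerpt:\n{c}") for c in chunks]
    # reduce step: merge the partial summaries into one final summary
    combined = "\n".join(partials)
    return summarize(f"Combine these partial summaries into one summary:\n{combined}")
```

In Make terms, the map step would be an iterator over the chunks feeding an HTTP/OpenAI module, with an aggregator collecting the partial summaries for one final module call.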

@Nelson_Keating One option is to use CloudConvert to reduce the file size. Though I can’t get the HTTP module to work with an attached file, so I’ll probably abandon Integromat for n8n (which works). Which HTTP params worked for you?
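On the HTTP params question: the transcription endpoint (`https://api.openai.com/v1/audio/transcriptions`) expects a `multipart/form-data` body with a binary `file` part and a text `model` part, plus an `Authorization: Bearer <key>` header; in Make's HTTP module that should correspond to Body type = Multipart/form-data with those two fields. A stdlib-only sketch that builds such a body, just to show what goes over the wire (the field names `file` and `model` are the real API's; the rest is illustrative):

```python
import uuid

def build_whisper_form(audio_bytes, filename, model="whisper-1"):
    """Build a multipart/form-data body with a text 'model' part and a
    binary 'file' part, and return (body, content_type_header_value)."""
    boundary = uuid.uuid4().hex
    model_part = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="model"\r\n\r\n{model}\r\n'
    )
    file_header = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{filename}"\r\n'
        f"Content-Type: audio/mpeg\r\n\r\n"
    )
    body = (
        model_part.encode()
        + file_header.encode()
        + audio_bytes
        + f"\r\n--{boundary}--\r\n".encode()
    )
    return body, f"multipart/form-data; boundary={boundary}"
```

If the HTTP module keeps failing with an attached file, comparing its raw request against a body shaped like this (field names, filename, boundary) is one way to spot the mismatch.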