When doing any sort of advanced operations with the OpenAI APIs, we have to compute how many tokens we've used and how many remain within each model's context window.
It would be wonderful if Make.com implemented an internal "tiktoken" module or function to simplify this work for us:
GitHub - openai/tiktoken: tiktoken is a fast BPE tokeniser for use with OpenAI's models.