What are you trying to achieve?
I’ve noticed an issue with the OpenAI o1-mini and o1-mini-2024-09-12 models on Make.com. Previously, with the max_tokens field, setting it to 0 automatically allowed the model’s maximum output. Now that the module uses max_completion_tokens instead, setting it to 0 no longer has that effect, and I have to set an explicit token limit manually.
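For context, when calling the API directly (outside Make), simply omitting max_completion_tokens seems to let the model generate up to its maximum output, rather than passing 0. Here’s a minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY in the environment — the parameter names here are the raw Chat Completions API ones, not the Make module fields:

```python
# Minimal sketch, assuming the official `openai` Python package (v1.x)
# and a valid OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Explicit cap: the completion is limited to 256 tokens.
capped = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "Summarize the history of the metric system."}],
    max_completion_tokens=256,
)

# Omitting max_completion_tokens entirely (rather than passing 0)
# lets the model use its own maximum output length.
uncapped = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "Summarize the history of the metric system."}],
)

print(capped.choices[0].message.content)
print(uncapped.choices[0].message.content)
```

In the Make module, though, the field seems to require a value, so I don’t see an equivalent way to just leave it unset.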
Has anyone else run into this? It was really convenient when 0 just meant “use the maximum.”
Would appreciate any insights or solutions!