Hi
I hope someone can help - I'm trying to find some information on function calling with V2 of the Assistants. In the scenario's function setup it asks for the function/tool URL, but there is no help or guide on how to create this. I'm guessing it needs to be a new scenario that uses a webhook and passes the data on to an API POST call, but what exactly needs to be passed through? I understand that outside the OpenAI platform you need to pass IDs along with the chat so that it can submit to the correct thread.
Can someone help? I'm really getting stuck here.
Here is the documentation that will give you directions on how to get started with function calling.
https://platform.openai.com/docs/guides/function-calling
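For reference, here is a minimal sketch of what a function/tool definition on an assistant can look like when created through the Node SDK. The assistant name, model, function name, and parameters below are only illustrative examples, not something specific to your setup:

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical example: an assistant with one function the model is allowed to call.
// The name, description, and parameter schema are placeholders.
const assistant = await openai.beta.assistants.create({
  name: "Order helper",
  model: "gpt-4o",
  instructions: "Help users check their order status.",
  tools: [
    {
      type: "function",
      function: {
        name: "get_order_status",
        description: "Look up the status of an order by its ID",
        parameters: {
          type: "object",
          properties: {
            order_id: { type: "string", description: "The order ID to look up" },
          },
          required: ["order_id"],
        },
      },
    },
  ],
});
```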
Hope this helps!
Hi, that document helps with building the API through my own cloud functions, but not really with the Make.com setup - or am I missing something?
Just to be clear, I have the functions set up in the assistant. In my JS-based API I can make it work, because the function code is handled alongside the assistant code.
I want to use the functions with the assistant in Make.com, but it asks for a URL for each function, which is different from how OpenAI handles it.
Hi Paul,
I can provide some detailed insights into how this module works. In addition, others have provided good overviews of how to set this up here and here.
Essentially, this Make module goes “above and beyond” what a typical Make module might do. The following happens if you enter a URL into the field for each function that your Assistant has configured.
1. The module first gets information about the assistant.
2. It then creates a thread with the assistant, adds the new messages, and runs the thread.
3. It then repeatedly checks the status of the run until it is either complete or requires action.
4. “Requires action” means the assistant responded with one or more functions that should be called, along with the JSON arguments it generated for each call.
5. Make then actually calls the URL you provided for that function and passes in the JSON generated by the assistant. That URL could technically be any URL in the world…but it is quite convenient for it to be a Make webhook URL, so another scenario can take that data and return a result via a webhook response.
Note: it might be a bit unintuitive, but the webhook response should return its data as plain text (not with content-type application/json). This is what the OpenAI Assistants API expects, so your response should be either text or stringified JSON (see the example endpoint after this list).
6. Make then takes the responses provided by those URLs and submits them to the Assistants API as “tool outputs”. This queues the run again, and Make keeps checking the status of the run until it is either complete or additional functions are required (steps 3 through 6 repeat in a loop for as long as needed; a rough sketch of this loop also follows the list).
7. Once the run is “complete”, Make retrieves the final response from the Assistants API.
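To make the note about the webhook response concrete: in Make you would normally pair a Custom Webhook with a Webhook Response module whose body is stringified JSON. But since the function URL can be any HTTP endpoint, here is a hedged sketch of an equivalent endpoint you could run yourself. The route, port, and field names are made up for illustration; the key point is that it receives the assistant-generated JSON arguments and returns stringified JSON as plain text:

```typescript
import express from "express";

const app = express();
app.use(express.text({ type: "*/*" })); // the assistant's arguments arrive as a JSON string in the body

// Hypothetical endpoint standing in for a Make webhook; the path and fields are examples only.
app.post("/get_order_status", (req, res) => {
  const args = JSON.parse(req.body); // e.g. { "order_id": "1234" }, as generated by the assistant
  // ...look the order up in your own system here...
  const result = { order_id: args.order_id, status: "shipped" };

  // Return stringified JSON with a plain-text content type, which is what gets passed back
  // to the Assistants API as a tool output.
  res.type("text/plain").send(JSON.stringify(result));
});

app.listen(3000);
```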
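And for anyone curious what the module is doing behind the scenes, here is a rough sketch of the create-run / poll / call-URL / submit-tool-outputs loop it manages for you, written with the OpenAI Node SDK. The assistant ID, webhook URL, and the function-name-to-URL mapping are placeholders, and the exact SDK method shapes can vary between versions - treat this as an illustration of the flow, not the module's actual code:

```typescript
import OpenAI from "openai";

const openai = new OpenAI();

// Placeholder mapping from function name to the URL you entered in the Make module.
const functionUrls: Record<string, string> = {
  get_order_status: "https://hook.make.com/your-webhook-id",
};

async function runAssistant(assistantId: string, userMessage: string): Promise<string> {
  // Steps 1-2: create a thread with the user's message and start a run.
  const thread = await openai.beta.threads.create({
    messages: [{ role: "user", content: userMessage }],
  });
  let run = await openai.beta.threads.runs.create(thread.id, { assistant_id: assistantId });

  // Steps 3-6: poll the run; when it requires action, call each function URL and submit the outputs.
  while (run.status !== "completed") {
    if (run.status === "requires_action" && run.required_action?.type === "submit_tool_outputs") {
      const toolCalls = run.required_action.submit_tool_outputs.tool_calls;
      const toolOutputs = await Promise.all(
        toolCalls.map(async (call) => {
          const response = await fetch(functionUrls[call.function.name], {
            method: "POST",
            body: call.function.arguments, // the JSON string generated by the assistant
          });
          return { tool_call_id: call.id, output: await response.text() };
        })
      );
      run = await openai.beta.threads.runs.submitToolOutputs(thread.id, run.id, {
        tool_outputs: toolOutputs,
      });
    } else if (run.status === "failed" || run.status === "cancelled" || run.status === "expired") {
      throw new Error(`Run ended with status ${run.status}`);
    } else {
      await new Promise((r) => setTimeout(r, 1000)); // wait a moment before checking again
      run = await openai.beta.threads.runs.retrieve(thread.id, run.id);
    }
  }

  // Step 7: once the run is complete, read the assistant's final reply from the thread.
  const messages = await openai.beta.threads.messages.list(thread.id);
  const firstBlock = messages.data[0].content[0];
  return firstBlock.type === "text" ? firstBlock.text.value : "";
}
```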
What this all means in simple terms:
- This module is one of the easiest ways to use Make to implement agentic behavior where an LLM is determining the right processes to call based upon its own logic.
- You can use Make scenarios to provide tools to your agent (these can be anything from looking up info in a CRM to sending an email).
- Make has included complex logic in this module to manage all of the complexity of potentially calling different functions based upon the response from the API. (pretty cool).
- A small word of caution: this capability is extremely powerful, and with great power comes great responsibility. These functions can do anything, and they will do it whenever the Assistant thinks they should. You can easily end up sending out emails with content you may or may not feel comfortable with in no time.
Hopefully that is helpful. Cheers.