Help with Telegram and OpenAI

Hi everyone, I’m fairly new to Make but not to automation in general. However, I can’t figure out how to get this working, and I couldn’t find any resources or tutorials about this specific issue.

My goal is to create a Telegram bot that interacts with OpenAI models. So far the completion part is working, but I’m struggling to understand how to implement GPT Vision.
What I want is to send a photo to the Telegram bot and have it analyzed by the GPT Vision module. So far I have the initial Telegram webhook / listener, a router with the Completions branch, and the non-working GPT Vision branch. I added an HTTP GetFile module to retrieve the Telegram image path, and I get a 200 response with JSON data containing the path. However, everything I try after that point gives errors (status 400 on a second HTTP GetFile with the path /{{}}, and, unsurprisingly, invalid-image errors from GPT Vision).

Apparently, I somehow need to parse the HTTP GetFile response to retrieve the binary file, encode it to base64, and feed it to the GPT Vision module, but I haven’t figured out how to do so.
I understand I might be able to use fewer HTTP modules, or none at all, but for now I’m stuck.
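To make the flow concrete, here is what I believe the steps need to be, written as a rough Python sketch rather than Make modules (the bot token, file path, and model name are all placeholders, and the actual HTTP calls are left out):

```python
import base64

def telegram_file_url(bot_token: str, file_path: str) -> str:
    # Downloads go through the /file/bot... endpoint, not the plain
    # /bot... endpoint used for API methods -- mixing these up is a
    # likely source of the 400 error I'm seeing.
    return f"https://api.telegram.org/file/bot{bot_token}/{file_path}"

def vision_payload(image_bytes: bytes, question: str) -> dict:
    # Base64-encode the downloaded bytes and embed them as a data URI,
    # which the vision endpoint accepts alongside plain image URLs.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",  # placeholder model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

# Placeholder values, just to show the shapes involved.
url = telegram_file_url("123:ABC", "photos/file_0.jpg")
payload = vision_payload(b"\xff\xd8\xff", "What is in this image?")
```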
Any help is appreciated.

Welcome to the Make community!

Looks like it shouldn’t be that complicated.

I’ve asked the AI Assistant to generate a scenario for you.


You should be able to map the file directly:

blueprint.json (1.7 KB)


Oh my! Thank you so much, that worked 🙂


I have more questions, and since they’re on the same topic I’ll add them here.

For instance, is it possible to make GPT aware of previous messages in the Telegram chat, or to connect multiple branches to one output (e.g. after using the Vision branch, the user replies with plain text on the Completions branch to ask for further clarification about the image)?
So far I could only get one-message context.
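To make the question concrete, this is roughly the structure I imagine, sketched in Python with placeholder text (`with_history` is just an illustrative helper, not a Make module):

```python
# Earlier turns of the Telegram conversation, stored somewhere,
# so a follow-up request can refer back to the image discussion.
history = [
    {"role": "user", "content": "What is in this image?"},
    {"role": "assistant", "content": "A photo of a bridge at sunset."},
]

def with_history(history: list, new_message: str) -> list:
    # Build the messages array for the next request: all the stored
    # turns, followed by the user's new follow-up message.
    return history + [{"role": "user", "content": new_message}]

messages = with_history(history, "Which bridge is it?")
```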

Hmm, I’m not quite sure how to help with that. You should create a new question for it.